00:00:00.000 Started by upstream project "autotest-per-patch" build number 132692
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.072 The recommended git tool is: git
00:00:00.073 using credential 00000000-0000-0000-0000-000000000002
00:00:00.074 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.101 Fetching changes from the remote Git repository
00:00:00.105 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.157 Using shallow fetch with depth 1
00:00:00.157 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.157 > git --version # timeout=10
00:00:00.207 > git --version # 'git version 2.39.2'
00:00:00.207 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.242 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.242 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.335 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.347 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.362 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.362 > git config core.sparsecheckout # timeout=10
00:00:07.373 > git read-tree -mu HEAD # timeout=10
00:00:07.391 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.414 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.414 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.534 [Pipeline] Start of Pipeline
00:00:07.546 [Pipeline] library
00:00:07.547 Loading library shm_lib@master
00:00:07.547 Library shm_lib@master is cached. Copying from home.
00:00:07.567 [Pipeline] node
00:00:07.574 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.576 [Pipeline] {
00:00:07.583 [Pipeline] catchError
00:00:07.584 [Pipeline] {
00:00:07.594 [Pipeline] wrap
00:00:07.599 [Pipeline] {
00:00:07.606 [Pipeline] stage
00:00:07.608 [Pipeline] { (Prologue)
00:00:07.880 [Pipeline] sh
00:00:08.169 + logger -p user.info -t JENKINS-CI
00:00:08.210 [Pipeline] echo
00:00:08.216 Node: CYP12
00:00:08.230 [Pipeline] sh
00:00:08.535 [Pipeline] setCustomBuildProperty
00:00:08.545 [Pipeline] echo
00:00:08.546 Cleanup processes
00:00:08.549 [Pipeline] sh
00:00:08.831 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.832 584878 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.847 [Pipeline] sh
00:00:09.137 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.137 ++ grep -v 'sudo pgrep'
00:00:09.137 ++ awk '{print $1}'
00:00:09.137 + sudo kill -9
00:00:09.137 + true
00:00:09.152 [Pipeline] cleanWs
00:00:09.162 [WS-CLEANUP] Deleting project workspace...
00:00:09.162 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.170 [WS-CLEANUP] done
00:00:09.175 [Pipeline] setCustomBuildProperty
00:00:09.190 [Pipeline] sh
00:00:09.473 + sudo git config --global --replace-all safe.directory '*'
00:00:09.565 [Pipeline] httpRequest
00:00:10.150 [Pipeline] echo
00:00:10.152 Sorcerer 10.211.164.20 is alive
00:00:10.162 [Pipeline] retry
00:00:10.163 [Pipeline] {
00:00:10.175 [Pipeline] httpRequest
00:00:10.179 HttpMethod: GET
00:00:10.179 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.180 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.188 Response Code: HTTP/1.1 200 OK
00:00:10.188 Success: Status code 200 is in the accepted range: 200,404
00:00:10.188 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:24.407 [Pipeline] }
00:00:24.426 [Pipeline] // retry
00:00:24.434 [Pipeline] sh
00:00:24.721 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:24.738 [Pipeline] httpRequest
00:00:25.046 [Pipeline] echo
00:00:25.048 Sorcerer 10.211.164.20 is alive
00:00:25.057 [Pipeline] retry
00:00:25.059 [Pipeline] {
00:00:25.073 [Pipeline] httpRequest
00:00:25.078 HttpMethod: GET
00:00:25.078 URL: http://10.211.164.20/packages/spdk_0ee529aeb4d691e9d62b037c70233bdac615ea03.tar.gz
00:00:25.079 Sending request to url: http://10.211.164.20/packages/spdk_0ee529aeb4d691e9d62b037c70233bdac615ea03.tar.gz
00:00:25.089 Response Code: HTTP/1.1 200 OK
00:00:25.090 Success: Status code 200 is in the accepted range: 200,404
00:00:25.090 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_0ee529aeb4d691e9d62b037c70233bdac615ea03.tar.gz
00:01:27.704 [Pipeline] }
00:01:27.725 [Pipeline] // retry
00:01:27.732 [Pipeline] sh
00:01:28.027 + tar --no-same-owner -xf spdk_0ee529aeb4d691e9d62b037c70233bdac615ea03.tar.gz
00:01:31.344 [Pipeline] sh
00:01:31.638 + git -C spdk log --oneline -n5
00:01:31.638 0ee529aeb lib/reduce: Support storing metadata on backing dev. (5 of 5, test cases)
00:01:31.638 85bc1e85a lib/reduce: Support storing metadata on backing dev. (4 of 5, data unmap with async metadata)
00:01:31.638 bb633fc85 lib/reduce: Support storing metadata on backing dev. (3 of 5, reload process)
00:01:31.638 4985835f7 lib/reduce: Support storing metadata on backing dev. (2 of 5, data r/w with async metadata)
00:01:31.638 b4d3c8f7d lib/reduce: Support storing metadata on backing dev. (1 of 5, struct define and init process)
00:01:31.649 [Pipeline] }
00:01:31.662 [Pipeline] // stage
00:01:31.671 [Pipeline] stage
00:01:31.673 [Pipeline] { (Prepare)
00:01:31.689 [Pipeline] writeFile
00:01:31.703 [Pipeline] sh
00:01:31.989 + logger -p user.info -t JENKINS-CI
00:01:32.003 [Pipeline] sh
00:01:32.294 + logger -p user.info -t JENKINS-CI
00:01:32.306 [Pipeline] sh
00:01:32.592 + cat autorun-spdk.conf
00:01:32.592 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:32.592 SPDK_TEST_NVMF=1
00:01:32.592 SPDK_TEST_NVME_CLI=1
00:01:32.592 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:32.592 SPDK_TEST_NVMF_NICS=e810
00:01:32.592 SPDK_TEST_VFIOUSER=1
00:01:32.592 SPDK_RUN_UBSAN=1
00:01:32.592 NET_TYPE=phy
00:01:32.600 RUN_NIGHTLY=0
00:01:32.613 [Pipeline] readFile
00:01:32.663 [Pipeline] withEnv
00:01:32.665 [Pipeline] {
00:01:32.674 [Pipeline] sh
00:01:32.955 + set -ex
00:01:32.955 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:32.955 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:32.955 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:32.955 ++ SPDK_TEST_NVMF=1
00:01:32.955 ++ SPDK_TEST_NVME_CLI=1
00:01:32.955 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:32.955 ++ SPDK_TEST_NVMF_NICS=e810
00:01:32.955 ++ SPDK_TEST_VFIOUSER=1
00:01:32.955 ++ SPDK_RUN_UBSAN=1
00:01:32.955 ++ NET_TYPE=phy
00:01:32.955 ++ RUN_NIGHTLY=0
00:01:32.955 + case $SPDK_TEST_NVMF_NICS in
00:01:32.955 + DRIVERS=ice
00:01:32.955 + [[ tcp == \r\d\m\a ]]
00:01:32.955 + [[ -n ice ]]
00:01:32.955 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:32.955 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:32.955 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:32.955 rmmod: ERROR: Module irdma is not currently loaded
00:01:32.955 rmmod: ERROR: Module i40iw is not currently loaded
00:01:32.955 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:32.955 + true
00:01:32.955 + for D in $DRIVERS
00:01:32.955 + sudo modprobe ice
00:01:32.955 + exit 0
00:01:32.964 [Pipeline] }
00:01:32.978 [Pipeline] // withEnv
00:01:32.983 [Pipeline] }
00:01:32.996 [Pipeline] // stage
00:01:33.005 [Pipeline] catchError
00:01:33.007 [Pipeline] {
00:01:33.019 [Pipeline] timeout
00:01:33.019 Timeout set to expire in 1 hr 0 min
00:01:33.021 [Pipeline] {
00:01:33.034 [Pipeline] stage
00:01:33.036 [Pipeline] { (Tests)
00:01:33.050 [Pipeline] sh
00:01:33.337 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:33.337 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:33.337 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:33.337 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:33.337 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:33.337 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:33.337 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:33.337 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:33.337 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:33.337 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:33.337 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:33.337 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:33.337 + source /etc/os-release
00:01:33.337 ++ NAME='Fedora Linux'
00:01:33.337 ++ VERSION='39 (Cloud Edition)'
00:01:33.337 ++ ID=fedora
00:01:33.337 ++ VERSION_ID=39
00:01:33.337 ++ VERSION_CODENAME=
00:01:33.337 ++ PLATFORM_ID=platform:f39
00:01:33.337 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:33.337 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:33.337 ++ LOGO=fedora-logo-icon
00:01:33.337 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:33.337 ++ HOME_URL=https://fedoraproject.org/
00:01:33.337 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:33.337 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:33.337 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:33.337 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:33.337 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:33.337 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:33.337 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:33.337 ++ SUPPORT_END=2024-11-12
00:01:33.337 ++ VARIANT='Cloud Edition'
00:01:33.337 ++ VARIANT_ID=cloud
00:01:33.337 + uname -a
00:01:33.337 Linux spdk-cyp-12 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:33.337 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:36.637 Hugepages
00:01:36.637 node hugesize free / total
00:01:36.637 node0 1048576kB 0 / 0
00:01:36.637 node0 2048kB 0 / 0
00:01:36.637 node1 1048576kB 0 / 0
00:01:36.637 node1 2048kB 0 / 0
00:01:36.637
00:01:36.637 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:36.637 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:36.637 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:36.637 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:36.637 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:36.637 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:36.637 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:36.897 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:36.897 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:36.897 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:36.897 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:36.897 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:36.897 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:36.897 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:36.897 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:36.897 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:36.898 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:36.898 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:36.898 + rm -f /tmp/spdk-ld-path
00:01:36.898 + source autorun-spdk.conf
00:01:36.898 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:36.898 ++ SPDK_TEST_NVMF=1
00:01:36.898 ++ SPDK_TEST_NVME_CLI=1
00:01:36.898 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:36.898 ++ SPDK_TEST_NVMF_NICS=e810
00:01:36.898 ++ SPDK_TEST_VFIOUSER=1
00:01:36.898 ++ SPDK_RUN_UBSAN=1
00:01:36.898 ++ NET_TYPE=phy
00:01:36.898 ++ RUN_NIGHTLY=0
00:01:36.898 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:36.898 + [[ -n '' ]]
00:01:36.898 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:36.898 + for M in /var/spdk/build-*-manifest.txt
00:01:36.898 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:36.898 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:36.898 + for M in /var/spdk/build-*-manifest.txt
00:01:36.898 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:36.898 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:36.898 + for M in /var/spdk/build-*-manifest.txt
00:01:36.898 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:36.898 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:36.898 ++ uname
00:01:36.898 + [[ Linux == \L\i\n\u\x ]]
00:01:36.898 + sudo dmesg -T
00:01:36.898 + sudo dmesg --clear
00:01:36.898 + dmesg_pid=585978
00:01:36.898 + [[ Fedora Linux == FreeBSD ]]
00:01:36.898 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:36.898 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:36.898 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:36.898 + [[ -x /usr/src/fio-static/fio ]]
00:01:36.898 + export FIO_BIN=/usr/src/fio-static/fio
00:01:36.898 + FIO_BIN=/usr/src/fio-static/fio
00:01:36.898 + sudo dmesg -Tw
00:01:36.898 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:36.898 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:36.898 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:36.898 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:36.898 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:36.898 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:36.898 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:36.898 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:36.898 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:37.159 13:05:59 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:37.159 13:05:59 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:37.159 13:05:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:37.159 13:05:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:37.159 13:05:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:37.159 13:05:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:37.159 13:05:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:37.159 13:05:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:37.159 13:05:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:37.159 13:05:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:37.159 13:05:59 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:37.159 13:05:59 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:37.159 13:05:59 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:37.159 13:05:59 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:37.159 13:05:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:37.159 13:05:59 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:37.159 13:05:59 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:37.159 13:05:59 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:37.159 13:05:59 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:37.159 13:05:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:37.159 13:05:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:37.159 13:05:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:37.159 13:05:59 -- paths/export.sh@5 -- $ export PATH
00:01:37.160 13:05:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:37.160 13:05:59 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:37.160 13:05:59 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:37.160 13:05:59 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733400359.XXXXXX
00:01:37.160 13:05:59 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733400359.JGlZoH
00:01:37.160 13:05:59 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:37.160 13:05:59 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:37.160 13:05:59 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:37.160 13:05:59 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:37.160 13:05:59 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:37.160 13:05:59 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:37.160 13:05:59 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:37.160 13:05:59 -- common/autotest_common.sh@10 -- $ set +x
00:01:37.160 13:05:59 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:37.160 13:05:59 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:37.160 13:05:59 -- pm/common@17 -- $ local monitor
00:01:37.160 13:05:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:37.160 13:05:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:37.160 13:05:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:37.160 13:05:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:37.160 13:05:59 -- pm/common@21 -- $ date +%s
00:01:37.160 13:05:59 -- pm/common@21 -- $ date +%s
00:01:37.160 13:05:59 -- pm/common@25 -- $ sleep 1
00:01:37.160 13:05:59 -- pm/common@21 -- $ date +%s
00:01:37.160 13:05:59 -- pm/common@21 -- $ date +%s
00:01:37.160 13:05:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733400359
00:01:37.160 13:05:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733400359
00:01:37.160 13:05:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733400359
00:01:37.160 13:05:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733400359
00:01:37.160 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733400359_collect-vmstat.pm.log
00:01:37.160 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733400359_collect-cpu-load.pm.log
00:01:37.160 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733400359_collect-cpu-temp.pm.log
00:01:37.160 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733400359_collect-bmc-pm.bmc.pm.log
00:01:38.109 13:06:00 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:38.109 13:06:00 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:38.109 13:06:00 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:38.109 13:06:00 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:38.109 13:06:00 -- spdk/autobuild.sh@16 -- $ date -u
00:01:38.109 Thu Dec 5 12:06:00 PM UTC 2024
00:01:38.109 13:06:00 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:38.421 v25.01-pre-287-g0ee529aeb
00:01:38.421 13:06:00 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:38.421 13:06:00 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:38.421 13:06:00 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:38.421 13:06:00 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:38.421 13:06:00 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:38.421 13:06:00 -- common/autotest_common.sh@10 -- $ set +x
************************************
00:01:38.421 START TEST ubsan
************************************
00:01:38.421 13:06:00 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:38.421 using ubsan
00:01:38.421
00:01:38.421 real 0m0.001s
00:01:38.421 user 0m0.000s
00:01:38.421 sys 0m0.000s
00:01:38.421 13:06:00 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:38.421 13:06:00 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:38.421 ************************************
00:01:38.421 END TEST ubsan
00:01:38.421 ************************************
00:01:38.421 13:06:00 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:38.421 13:06:00 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:38.421 13:06:00 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:38.421 13:06:00 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:38.421 13:06:00 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:38.421 13:06:00 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:38.421 13:06:00 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:38.421 13:06:00 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:38.421 13:06:00 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:38.421 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:38.421 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:38.998 Using 'verbs' RDMA provider
00:01:54.855 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:07.096 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:07.096 Creating mk/config.mk...done.
00:02:07.096 Creating mk/cc.flags.mk...done.
00:02:07.096 Type 'make' to build.
00:02:07.096 13:06:29 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:02:07.096 13:06:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:07.096 13:06:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:07.096 13:06:29 -- common/autotest_common.sh@10 -- $ set +x
00:02:07.096 ************************************
00:02:07.096 START TEST make
00:02:07.096 ************************************
00:02:07.096 13:06:29 make -- common/autotest_common.sh@1129 -- $ make -j144
00:02:07.668 make[1]: Nothing to be done for 'all'.
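(Note: the next section of the log records the meson/ninja build of the bundled libvfio-user tree. As a minimal sketch, assuming the paths shown in this log and the option values echoed under "User defined options" below, the sequence corresponds roughly to the following commands; the exact invocation SPDK's build scripts emit is not shown in the log and may differ.)

    # libvfio-user source and staging trees, as they appear in this log
    SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
    BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user
    # configure a debug build of the shared library (options match the
    # "User defined options" block printed by meson below)
    meson setup "$BUILD/build-debug" "$SRC" \
        --buildtype=debug --default-library=shared --libdir=/usr/local/lib
    # compile with the ninja backend, then stage the install under DESTDIR,
    # matching the "meson install --quiet -C ..." line recorded below
    ninja -C "$BUILD/build-debug"
    DESTDIR="$BUILD" meson install --quiet -C "$BUILD/build-debug"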
00:02:09.047 The Meson build system
00:02:09.047 Version: 1.5.0
00:02:09.047 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:09.047 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:09.047 Build type: native build
00:02:09.047 Project name: libvfio-user
00:02:09.047 Project version: 0.0.1
00:02:09.047 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:09.047 C linker for the host machine: cc ld.bfd 2.40-14
00:02:09.047 Host machine cpu family: x86_64
00:02:09.047 Host machine cpu: x86_64
00:02:09.047 Run-time dependency threads found: YES
00:02:09.047 Library dl found: YES
00:02:09.047 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:09.047 Run-time dependency json-c found: YES 0.17
00:02:09.047 Run-time dependency cmocka found: YES 1.1.7
00:02:09.047 Program pytest-3 found: NO
00:02:09.047 Program flake8 found: NO
00:02:09.047 Program misspell-fixer found: NO
00:02:09.047 Program restructuredtext-lint found: NO
00:02:09.047 Program valgrind found: YES (/usr/bin/valgrind)
00:02:09.047 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:09.047 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:09.047 Compiler for C supports arguments -Wwrite-strings: YES
00:02:09.047 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:09.047 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:09.047 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:09.047 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:09.047 Build targets in project: 8
00:02:09.047 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:09.047 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:09.047
00:02:09.047 libvfio-user 0.0.1
00:02:09.047
00:02:09.047 User defined options
00:02:09.047 buildtype : debug
00:02:09.047 default_library: shared
00:02:09.047 libdir : /usr/local/lib
00:02:09.047
00:02:09.047 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:09.047 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:09.305 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:09.305 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:09.305 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:09.305 [4/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:09.305 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:09.305 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:09.305 [7/37] Compiling C object samples/server.p/server.c.o
00:02:09.305 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:09.305 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:09.305 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:09.305 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:09.305 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:09.305 [13/37] Compiling C object samples/null.p/null.c.o
00:02:09.305 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:09.305 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:09.305 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:09.305 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:09.305 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:09.305 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:09.305 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:09.305 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:09.305 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:09.305 [23/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:09.305 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:09.305 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:09.305 [26/37] Compiling C object samples/client.p/client.c.o
00:02:09.305 [27/37] Linking target samples/client
00:02:09.305 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:09.305 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:09.305 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:09.305 [31/37] Linking target test/unit_tests
00:02:09.565 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:09.565 [33/37] Linking target samples/server
00:02:09.565 [34/37] Linking target samples/lspci
00:02:09.565 [35/37] Linking target samples/null
00:02:09.565 [36/37] Linking target samples/gpio-pci-idio-16
00:02:09.565 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:09.565 INFO: autodetecting backend as ninja
00:02:09.565 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:09.565 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:10.137 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:10.137 ninja: no work to do.
00:02:16.728 The Meson build system
00:02:16.728 Version: 1.5.0
00:02:16.728 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:16.728 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:16.728 Build type: native build
00:02:16.728 Program cat found: YES (/usr/bin/cat)
00:02:16.728 Project name: DPDK
00:02:16.728 Project version: 24.03.0
00:02:16.728 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:16.728 C linker for the host machine: cc ld.bfd 2.40-14
00:02:16.728 Host machine cpu family: x86_64
00:02:16.728 Host machine cpu: x86_64
00:02:16.728 Message: ## Building in Developer Mode ##
00:02:16.728 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:16.728 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:16.728 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:16.728 Program python3 found: YES (/usr/bin/python3)
00:02:16.728 Program cat found: YES (/usr/bin/cat)
00:02:16.728 Compiler for C supports arguments -march=native: YES
00:02:16.728 Checking for size of "void *" : 8
00:02:16.728 Checking for size of "void *" : 8 (cached)
00:02:16.728 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:16.728 Library m found: YES
00:02:16.728 Library numa found: YES
00:02:16.728 Has header "numaif.h" : YES
00:02:16.728 Library fdt found: NO
00:02:16.728 Library execinfo found: NO
00:02:16.728 Has header "execinfo.h" : YES
00:02:16.728 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:16.728 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:16.728 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:16.728 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:16.728 Run-time dependency openssl found: YES 3.1.1
00:02:16.728 Run-time dependency libpcap found: YES 1.10.4
00:02:16.728 Has header "pcap.h" with dependency libpcap: YES
00:02:16.728 Compiler for C supports arguments -Wcast-qual: YES
00:02:16.728 Compiler for C supports arguments -Wdeprecated: YES
00:02:16.728 Compiler for C supports arguments -Wformat: YES
00:02:16.728 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:16.728 Compiler for C supports arguments -Wformat-security: NO
00:02:16.728 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:16.728 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:16.728 Compiler for C supports arguments -Wnested-externs: YES
00:02:16.728 Compiler for C supports arguments -Wold-style-definition: YES
00:02:16.728 Compiler for C supports arguments -Wpointer-arith: YES
00:02:16.728 Compiler for C supports arguments -Wsign-compare: YES
00:02:16.728 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:16.728 Compiler for C supports arguments -Wundef: YES
00:02:16.728 Compiler for C supports arguments -Wwrite-strings: YES
00:02:16.728 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:16.728 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:16.729 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:16.729 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:16.729 Program objdump found: YES (/usr/bin/objdump)
00:02:16.729 Compiler for C supports arguments -mavx512f: YES
00:02:16.729 Checking if "AVX512 checking" compiles: YES
00:02:16.729 Fetching value of define "__SSE4_2__" : 1
00:02:16.729 Fetching value of define "__AES__" : 1
00:02:16.729 Fetching value of define "__AVX__" : 1
00:02:16.729 Fetching value of define "__AVX2__" : 1
00:02:16.729 Fetching value of define "__AVX512BW__" : 1
00:02:16.729 Fetching value of define "__AVX512CD__" : 1
00:02:16.729 Fetching value of define "__AVX512DQ__" : 1
00:02:16.729 Fetching value of define "__AVX512F__" : 1
00:02:16.729 Fetching value of define "__AVX512VL__" : 1
00:02:16.729 Fetching value of define "__PCLMUL__" : 1
00:02:16.729 Fetching value of define "__RDRND__" : 1
00:02:16.729 Fetching value of define "__RDSEED__" : 1
00:02:16.729 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:16.729 Fetching value of define "__znver1__" : (undefined)
00:02:16.729 Fetching value of define "__znver2__" : (undefined)
00:02:16.729 Fetching value of define "__znver3__" : (undefined)
00:02:16.729 Fetching value of define "__znver4__" : (undefined)
00:02:16.729 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:16.729 Message: lib/log: Defining dependency "log"
00:02:16.729 Message: lib/kvargs: Defining dependency "kvargs"
00:02:16.729 Message: lib/telemetry: Defining dependency "telemetry"
00:02:16.729 Checking for function "getentropy" : NO
00:02:16.729 Message: lib/eal: Defining dependency "eal"
00:02:16.729 Message: lib/ring: Defining dependency "ring"
00:02:16.729 Message: lib/rcu: Defining dependency "rcu"
00:02:16.729 Message: lib/mempool: Defining dependency "mempool"
00:02:16.729 Message: lib/mbuf: Defining dependency "mbuf"
00:02:16.729 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:16.729 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:16.729 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:16.729 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:16.729 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:16.729 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:16.729 Compiler for C supports arguments -mpclmul: YES
00:02:16.729 Compiler for C supports arguments -maes: YES
00:02:16.729 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:16.729 Compiler for C supports arguments -mavx512bw: YES
00:02:16.729 Compiler for C supports arguments -mavx512dq: YES
00:02:16.729 Compiler for C supports arguments -mavx512vl: YES
00:02:16.729 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:16.729 Compiler for C supports arguments -mavx2: YES
00:02:16.729 Compiler for C supports arguments -mavx: YES
00:02:16.729 Message: lib/net: Defining dependency "net"
00:02:16.729 Message: lib/meter: Defining dependency "meter"
00:02:16.729 Message: lib/ethdev: Defining dependency "ethdev"
00:02:16.729 Message: lib/pci: Defining dependency "pci"
00:02:16.729 Message: lib/cmdline: Defining dependency "cmdline"
00:02:16.729 Message: lib/hash: Defining dependency "hash"
00:02:16.729 Message: lib/timer: Defining dependency "timer"
00:02:16.729 Message: lib/compressdev: Defining dependency "compressdev"
00:02:16.729 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:16.729 Message: lib/dmadev: Defining dependency "dmadev"
00:02:16.729 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:16.729 Message: lib/power: Defining dependency "power"
00:02:16.729 Message: lib/reorder: Defining dependency "reorder"
00:02:16.729 Message: lib/security: Defining dependency "security"
00:02:16.729 Has header "linux/userfaultfd.h" : YES
00:02:16.729 Has header "linux/vduse.h" : YES
00:02:16.729 Message: lib/vhost: Defining dependency "vhost"
00:02:16.729 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:16.729 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:16.729 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:16.729 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:16.729 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:16.729 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:16.729 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:16.729 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:16.729 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:16.729 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:16.729 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:16.729 Configuring doxy-api-html.conf using configuration
00:02:16.729 Configuring doxy-api-man.conf using configuration
00:02:16.729 Program mandb found: YES (/usr/bin/mandb)
00:02:16.729 Program sphinx-build found: NO
00:02:16.729 Configuring rte_build_config.h using configuration
00:02:16.729 Message:
00:02:16.729 =================
00:02:16.729 Applications Enabled
00:02:16.729 =================
00:02:16.729
00:02:16.729 apps:
00:02:16.729
00:02:16.729
00:02:16.729 Message:
00:02:16.729 =================
00:02:16.729 Libraries Enabled
00:02:16.729 =================
00:02:16.729
00:02:16.729 libs:
00:02:16.729 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:16.729 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:16.729 cryptodev, dmadev, power, reorder, security, vhost,
00:02:16.729
00:02:16.729 Message:
00:02:16.729 ===============
00:02:16.729 Drivers Enabled
00:02:16.729 ===============
00:02:16.729
00:02:16.729 common:
00:02:16.729
00:02:16.729 bus:
00:02:16.729 pci, vdev,
00:02:16.729 mempool:
00:02:16.729 ring,
00:02:16.729 dma:
00:02:16.729
00:02:16.729 net:
00:02:16.729
00:02:16.729 crypto:
00:02:16.729
00:02:16.729 compress:
00:02:16.729
00:02:16.729 vdpa:
00:02:16.729
00:02:16.729
00:02:16.729 Message:
00:02:16.729 =================
00:02:16.729 Content Skipped
00:02:16.729 =================
00:02:16.729
00:02:16.729 apps:
00:02:16.729 dumpcap: explicitly disabled via build config
00:02:16.729 graph: explicitly disabled via build config
00:02:16.729 pdump: explicitly disabled via build config
00:02:16.729 proc-info: explicitly disabled via build config
00:02:16.729 test-acl: explicitly disabled via build config
00:02:16.729 test-bbdev: explicitly disabled via build config
00:02:16.729 test-cmdline: explicitly disabled via build config
00:02:16.729 test-compress-perf: explicitly disabled via build config
00:02:16.729 test-crypto-perf: explicitly disabled via build config
00:02:16.729 test-dma-perf: explicitly disabled via build config
00:02:16.729 test-eventdev: explicitly disabled via build config
00:02:16.729 test-fib: explicitly disabled via build config
00:02:16.729 test-flow-perf: explicitly disabled via build config
00:02:16.729 test-gpudev: explicitly disabled via build config
00:02:16.729 test-mldev: explicitly disabled via build config
00:02:16.729 test-pipeline: explicitly disabled via build config
00:02:16.729 test-pmd: explicitly disabled via build config
00:02:16.729 test-regex: explicitly disabled via build config
00:02:16.729 test-sad: explicitly disabled via build config
00:02:16.729 test-security-perf: explicitly disabled via build config
00:02:16.729
00:02:16.729 libs:
00:02:16.729 argparse: explicitly disabled via build config
00:02:16.729 metrics: explicitly disabled via build config
00:02:16.729 acl: explicitly disabled via build config
00:02:16.729 bbdev: explicitly disabled via build config
00:02:16.729 bitratestats: explicitly disabled via build config
00:02:16.729 bpf: explicitly disabled via build config
00:02:16.729 cfgfile: explicitly disabled via build config
00:02:16.729 distributor: explicitly disabled via build config
00:02:16.729 efd: explicitly disabled via build config
00:02:16.729 eventdev: explicitly disabled via build config
00:02:16.729 dispatcher: explicitly disabled via build config
00:02:16.729 gpudev: explicitly disabled via build config
00:02:16.729 gro: explicitly disabled via build config
00:02:16.729 gso: explicitly disabled via build config
00:02:16.729 ip_frag: explicitly disabled via build config
00:02:16.729 jobstats: explicitly disabled via build config
00:02:16.729 latencystats: explicitly disabled via build config
00:02:16.729 lpm: explicitly disabled via build config
00:02:16.729 member: explicitly disabled via build config
00:02:16.729 pcapng: explicitly disabled via build config
00:02:16.729 rawdev: explicitly disabled via build config
00:02:16.729 regexdev: explicitly disabled via build config
00:02:16.729 mldev: explicitly disabled via build config
00:02:16.729 rib: explicitly disabled via build config
00:02:16.729 sched: explicitly disabled via build config
00:02:16.729 stack: explicitly disabled via build config
00:02:16.729 ipsec: explicitly disabled via build config
00:02:16.729 pdcp: explicitly disabled via build config
00:02:16.729 fib: explicitly disabled via build config
00:02:16.729 port: explicitly disabled via build config
00:02:16.729 pdump: explicitly disabled via build config
00:02:16.729 table: explicitly disabled via build config
00:02:16.729 pipeline: explicitly disabled via build config
00:02:16.729 graph: explicitly disabled via build config
00:02:16.729 node: explicitly disabled via build config
00:02:16.729
00:02:16.729 drivers:
00:02:16.729 common/cpt: not in enabled drivers build config
00:02:16.729 common/dpaax: not in enabled drivers build config
00:02:16.729 common/iavf: not in enabled drivers build config
00:02:16.729 common/idpf: not in enabled drivers build config
00:02:16.729 common/ionic: not in enabled drivers build config
00:02:16.729 common/mvep: not in enabled drivers build config
00:02:16.729 common/octeontx: not in enabled drivers build config
00:02:16.729 bus/auxiliary: not in enabled drivers build config
00:02:16.729 bus/cdx: not in enabled drivers build config
00:02:16.729 bus/dpaa: not in enabled drivers build config
00:02:16.729 bus/fslmc: not in enabled drivers build config
00:02:16.729 bus/ifpga: not in enabled drivers build config
00:02:16.729 bus/platform: not in enabled drivers build config
00:02:16.729 bus/uacce: not in enabled drivers build config
00:02:16.729 bus/vmbus: not in enabled drivers build config
00:02:16.729 common/cnxk: not in enabled drivers build config
00:02:16.729 common/mlx5: not in enabled drivers build config
00:02:16.730 common/nfp: not in enabled drivers build config
00:02:16.730 common/nitrox: not in enabled drivers build config
00:02:16.730 common/qat: not in enabled drivers build config
00:02:16.730 common/sfc_efx: not in enabled drivers build config
00:02:16.730 mempool/bucket: not in enabled drivers build config
00:02:16.730 mempool/cnxk: not in enabled drivers build config
00:02:16.730 mempool/dpaa: not in enabled drivers build config
00:02:16.730 mempool/dpaa2: not in enabled drivers build config
00:02:16.730 mempool/octeontx: not in enabled drivers build config
00:02:16.730 mempool/stack: not in enabled drivers build config
00:02:16.730 dma/cnxk: not in enabled drivers build config
00:02:16.730 dma/dpaa: not in enabled drivers build config
00:02:16.730 dma/dpaa2: not in enabled drivers build config
00:02:16.730 dma/hisilicon: not in enabled drivers build config
00:02:16.730 dma/idxd: not in enabled drivers build config
00:02:16.730 dma/ioat: not in enabled drivers build config
00:02:16.730 dma/skeleton: not in enabled drivers build config
00:02:16.730 net/af_packet: not in enabled drivers build config
00:02:16.730 net/af_xdp: not in enabled drivers build config
00:02:16.730 net/ark: not in enabled drivers build config
00:02:16.730 net/atlantic: not in enabled drivers build config
00:02:16.730 net/avp: not in enabled drivers build config
00:02:16.730 net/axgbe: not in enabled drivers build config
00:02:16.730 net/bnx2x: not in enabled drivers build config
00:02:16.730 net/bnxt: not in enabled drivers build config
00:02:16.730 net/bonding: not in enabled drivers build config
00:02:16.730 net/cnxk: not in enabled drivers build config
00:02:16.730 net/cpfl: not in enabled drivers build config
00:02:16.730 net/cxgbe: not in enabled drivers build config
00:02:16.730 net/dpaa: not in enabled drivers build config
00:02:16.730 net/dpaa2: not in enabled drivers build config
00:02:16.730 net/e1000: not in enabled drivers build config
00:02:16.730 net/ena: not in enabled drivers build config
00:02:16.730 net/enetc: not in enabled drivers build config
00:02:16.730 net/enetfec: not in enabled drivers build config
00:02:16.730 net/enic: not in enabled drivers build config
00:02:16.730 net/failsafe: not in enabled drivers build config
00:02:16.730 net/fm10k: not in enabled drivers build config
00:02:16.730 net/gve: not in enabled drivers build config
00:02:16.730 net/hinic: not in enabled drivers build config
00:02:16.730 net/hns3: not in enabled drivers build config
00:02:16.730 net/i40e: not in enabled drivers build config
00:02:16.730 net/iavf: not in enabled drivers build config
00:02:16.730 net/ice: not in enabled drivers build config
00:02:16.730 net/idpf: not in enabled drivers build config
00:02:16.730 net/igc: not in enabled drivers build config
00:02:16.730 net/ionic: not in enabled drivers build config
00:02:16.730 net/ipn3ke: not in enabled drivers build config
00:02:16.730 net/ixgbe: not in enabled drivers build config
00:02:16.730 net/mana: not in enabled drivers build config
00:02:16.730 net/memif: not in enabled drivers build config
00:02:16.730 net/mlx4: not in enabled drivers build config
00:02:16.730 net/mlx5: not in enabled drivers build config
00:02:16.730 net/mvneta: not in enabled drivers build config
00:02:16.730 net/mvpp2: not in enabled drivers build config
00:02:16.730 net/netvsc: not in enabled drivers build config
00:02:16.730 net/nfb: not in enabled drivers build config
00:02:16.730 net/nfp: not in enabled drivers build config
00:02:16.730 net/ngbe: not in enabled drivers build config
00:02:16.730 net/null: not in enabled drivers build config
00:02:16.730 net/octeontx: not in enabled drivers build config
00:02:16.730 net/octeon_ep: not in enabled drivers build config
00:02:16.730 net/pcap: not in enabled drivers build config
00:02:16.730 net/pfe: not in enabled drivers build config
00:02:16.730 net/qede: not in enabled drivers build config
00:02:16.730 net/ring: not in enabled drivers build config
00:02:16.730 net/sfc: not in enabled drivers build config
00:02:16.730 net/softnic: not in enabled drivers build config
00:02:16.730 net/tap: not in enabled drivers build config
00:02:16.730 net/thunderx: not in enabled drivers build config
00:02:16.730 net/txgbe: not in enabled drivers build config
00:02:16.730 net/vdev_netvsc: not in enabled drivers build config
00:02:16.730 net/vhost: not in enabled drivers build config
00:02:16.730 net/virtio: not in enabled drivers build config
00:02:16.730 net/vmxnet3: not in enabled drivers build config
00:02:16.730 raw/*: missing internal dependency, "rawdev"
00:02:16.730 crypto/armv8: not in enabled drivers build config
00:02:16.730 crypto/bcmfs: not in enabled drivers build config
00:02:16.730 crypto/caam_jr: not in enabled drivers build config
00:02:16.730 crypto/ccp: not in enabled drivers build config
00:02:16.730 crypto/cnxk: not in enabled drivers build config
00:02:16.730 crypto/dpaa_sec: not in enabled drivers build config
00:02:16.730 crypto/dpaa2_sec: not in enabled drivers build config
00:02:16.730 crypto/ipsec_mb: not in enabled drivers build config
00:02:16.730 crypto/mlx5: not in enabled drivers build config
00:02:16.730 crypto/mvsam: not in enabled drivers build config
00:02:16.730 crypto/nitrox: not in enabled drivers build config
00:02:16.730 crypto/null: not in enabled drivers build config
00:02:16.730 crypto/octeontx: not in enabled drivers build config
00:02:16.730 crypto/openssl: not in enabled drivers build config
00:02:16.730 crypto/scheduler: not in enabled drivers build config
00:02:16.730 crypto/uadk: not in enabled drivers build config
00:02:16.730 crypto/virtio: not in enabled drivers build config
00:02:16.730 compress/isal: not in enabled drivers build config
00:02:16.730 compress/mlx5: not in enabled drivers build config
00:02:16.730 compress/nitrox: not in enabled drivers build config
00:02:16.730 compress/octeontx: not in enabled drivers build config
00:02:16.730 compress/zlib: not in enabled drivers build config
00:02:16.730 regex/*: missing internal dependency, "regexdev"
00:02:16.730 ml/*: missing internal dependency, "mldev"
00:02:16.730 vdpa/ifc: not in enabled drivers build config
00:02:16.730 vdpa/mlx5: not in enabled drivers build config
00:02:16.730 vdpa/nfp: not in enabled drivers build config
00:02:16.730 vdpa/sfc: not in enabled drivers build config
00:02:16.730 event/*: missing internal dependency, "eventdev"
00:02:16.730 baseband/*: missing internal dependency, "bbdev"
00:02:16.730 gpu/*: missing internal dependency, "gpudev"
00:02:16.730
00:02:16.730
00:02:16.730 Build targets in project: 84
00:02:16.730
00:02:16.730 DPDK 24.03.0
00:02:16.730
00:02:16.730 User defined options
00:02:16.730 buildtype : debug
00:02:16.730 default_library : shared
00:02:16.730 libdir : lib
00:02:16.730 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:16.730 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:16.730 c_link_args :
00:02:16.730 cpu_instruction_set: native
00:02:16.730 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:02:16.730 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:02:16.730 enable_docs : false
00:02:16.730 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:16.730 enable_kmods : false
00:02:16.730 max_lcores : 128
00:02:16.730 tests : false
00:02:16.730
00:02:16.730 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:16.730 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:16.731 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:16.731 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:16.731 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:16.731 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:16.731 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:16.731 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:16.731 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:16.731 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:16.731 [9/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:16.731 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:16.731 [11/267] Linking static target lib/librte_kvargs.a
00:02:16.731 [12/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:16.731 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:16.731 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:16.731 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:16.731 [16/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:16.731 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:16.731 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:16.731 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:16.731 [20/267] Linking static target lib/librte_log.a
00:02:16.731 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:16.731 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:16.731 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:16.731 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:16.731 [25/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:16.731 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:16.731 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:16.731 [28/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:16.731 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:16.731 [30/267] Linking static target lib/librte_pci.a
00:02:16.731 [31/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:16.731 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:16.731 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:16.731 [34/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:16.731 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:16.989 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:16.989 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:16.989 [38/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:16.989 [39/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:16.989 [40/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:16.989 [41/267] Linking static target lib/librte_ring.a
00:02:16.989 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:16.989 [43/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.989 [44/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:16.989 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:16.989 [46/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:16.989 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:16.989 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:16.989 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:16.989 [50/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:16.989 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:16.989 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:16.989 [53/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:17.249 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:17.249 [55/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:17.249 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:17.249 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:17.249 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:17.249 [59/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:17.249 [60/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:17.249 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:17.249 [62/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:17.249 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:17.249 [64/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:17.249 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:17.249 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:17.249 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:17.249 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:17.249 [69/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:02:17.249 [70/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:17.249 [71/267] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:17.249 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:17.249 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:17.249 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:17.249 [75/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:17.249 [76/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:17.249 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:17.249 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:17.249 [79/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:17.249 [80/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:17.249 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:17.249 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:17.249 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:17.249 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:17.249 [85/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:17.249 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:17.249 [87/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:17.249 [88/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:17.249 [89/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:17.249 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:17.249 [91/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:17.249 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:17.249 [93/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:17.249 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:17.249 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:17.249 [96/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:17.249 [97/267] Linking static target lib/librte_timer.a
00:02:17.249 [98/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:17.249 [99/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:17.249 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:17.249 [101/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:17.249 [102/267] Linking static target lib/librte_meter.a
00:02:17.249 [103/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:17.249 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:17.249 [105/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:17.249 [106/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:17.249 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:17.249 [108/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:17.249 [109/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:17.249 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:17.249 [111/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:17.249 [112/267] Linking static target lib/librte_telemetry.a
00:02:17.249 [113/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:17.249 [114/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:17.249 [115/267] Linking static target lib/librte_cmdline.a
00:02:17.249 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:17.249 [117/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:17.249 [118/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:17.249 [119/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:17.249 [120/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:17.249 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:17.249 [122/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:17.249 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:17.249 [124/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:17.249 [125/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:17.249 [126/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:17.249 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:17.249 [128/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:17.249 [129/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:17.249 [130/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:17.249 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:17.249 [132/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:17.249 [133/267] Linking static target lib/librte_net.a
00:02:17.249 [134/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:17.249 [135/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:17.249 [136/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:17.249 [137/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:17.249 [138/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:17.249 [139/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:17.249 [140/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.249 [141/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:17.249 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:17.249 [143/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:17.249 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:17.249 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:17.249 [146/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:17.249 [147/267] Linking target lib/librte_log.so.24.1
00:02:17.249 [148/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:17.249 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:17.249 [150/267] Compiling C object
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:17.249 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:17.249 [152/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:17.249 [153/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:17.249 [154/267] Linking static target lib/librte_power.a 00:02:17.249 [155/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:17.249 [156/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:17.249 [157/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:17.249 [158/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:17.249 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:17.249 [160/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:17.249 [161/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:17.249 [162/267] Linking static target lib/librte_rcu.a 00:02:17.249 [163/267] Linking static target lib/librte_compressdev.a 00:02:17.249 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:17.249 [165/267] Linking static target lib/librte_mempool.a 00:02:17.249 [166/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:17.249 [167/267] Linking static target lib/librte_reorder.a 00:02:17.249 [168/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:17.250 [169/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.250 [170/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.510 [171/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:17.510 [172/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.510 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:17.510 [174/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:17.510 [175/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:17.510 [176/267] Linking static target drivers/librte_bus_vdev.a 00:02:17.510 [177/267] Linking static target lib/librte_dmadev.a 00:02:17.510 [178/267] Linking static target lib/librte_eal.a 00:02:17.510 [179/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:17.510 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:17.510 [181/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:17.510 [182/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:17.510 [183/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:17.510 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:17.510 [185/267] Linking static target lib/librte_security.a 00:02:17.510 [186/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:17.510 [187/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.510 [188/267] Linking target lib/librte_kvargs.so.24.1 00:02:17.510 [189/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:17.510 [190/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:17.510 [191/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:17.510 [192/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:17.511 [193/267] Linking static target lib/librte_mbuf.a 00:02:17.511 [194/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:17.511 [195/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:17.511 [196/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:17.511 [197/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:17.511 [198/267] Linking static target lib/librte_hash.a 00:02:17.511 [199/267] Linking static target drivers/librte_mempool_ring.a 00:02:17.511 [200/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.770 [201/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:17.770 [202/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.770 [203/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.770 [204/267] Linking static target drivers/librte_bus_pci.a 00:02:17.770 [205/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.770 [206/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.770 [207/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.770 [208/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:17.770 [209/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.770 [210/267] Linking static target lib/librte_cryptodev.a 00:02:17.770 [211/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.770 [212/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:17.770 [213/267] Linking target lib/librte_telemetry.so.24.1 00:02:18.031 [214/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:18.031 [215/267] Linking static target lib/librte_ethdev.a 00:02:18.031 [216/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:18.031 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:18.031 [218/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.031 [219/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.031 [220/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.293 [221/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.293 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.554 [223/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.554 [224/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.554 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.554 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.492 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:19.492 [228/267] Linking static target 
lib/librte_vhost.a 00:02:20.063 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.447 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.017 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.584 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.584 [233/267] Linking target lib/librte_eal.so.24.1 00:02:28.843 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:28.843 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:28.843 [236/267] Linking target lib/librte_ring.so.24.1 00:02:28.843 [237/267] Linking target lib/librte_meter.so.24.1 00:02:28.843 [238/267] Linking target lib/librte_timer.so.24.1 00:02:28.843 [239/267] Linking target lib/librte_pci.so.24.1 00:02:28.843 [240/267] Linking target lib/librte_dmadev.so.24.1 00:02:29.103 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:29.103 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:29.103 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:29.103 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:29.103 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:29.103 [246/267] Linking target lib/librte_mempool.so.24.1 00:02:29.103 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:29.103 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:29.103 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:29.103 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:29.103 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:29.103 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:29.363 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:29.363 [254/267] Linking target lib/librte_net.so.24.1 00:02:29.363 [255/267] Linking target lib/librte_reorder.so.24.1 00:02:29.363 [256/267] Linking target lib/librte_compressdev.so.24.1 00:02:29.363 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:29.363 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:29.623 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:29.623 [260/267] Linking target lib/librte_hash.so.24.1 00:02:29.623 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:29.623 [262/267] Linking target lib/librte_cmdline.so.24.1 00:02:29.623 [263/267] Linking target lib/librte_security.so.24.1 00:02:29.623 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:29.623 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:29.623 [266/267] Linking target lib/librte_power.so.24.1 00:02:29.623 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:29.623 INFO: autodetecting backend as ninja 00:02:29.623 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:33.825 CC lib/log/log.o 00:02:33.825 CC lib/log/log_flags.o 00:02:33.825 CC lib/ut_mock/mock.o 00:02:33.825 CC lib/log/log_deprecated.o 
00:02:33.825 CC lib/ut/ut.o 00:02:33.825 LIB libspdk_ut_mock.a 00:02:33.825 LIB libspdk_ut.a 00:02:33.825 LIB libspdk_log.a 00:02:33.825 SO libspdk_ut_mock.so.6.0 00:02:33.825 SO libspdk_ut.so.2.0 00:02:33.825 SO libspdk_log.so.7.1 00:02:33.825 SYMLINK libspdk_ut_mock.so 00:02:33.825 SYMLINK libspdk_ut.so 00:02:33.825 SYMLINK libspdk_log.so 00:02:34.087 CXX lib/trace_parser/trace.o 00:02:34.348 CC lib/util/base64.o 00:02:34.348 CC lib/ioat/ioat.o 00:02:34.348 CC lib/util/bit_array.o 00:02:34.348 CC lib/util/cpuset.o 00:02:34.348 CC lib/util/crc16.o 00:02:34.348 CC lib/dma/dma.o 00:02:34.348 CC lib/util/crc32.o 00:02:34.348 CC lib/util/crc32c.o 00:02:34.348 CC lib/util/crc32_ieee.o 00:02:34.348 CC lib/util/crc64.o 00:02:34.348 CC lib/util/dif.o 00:02:34.348 CC lib/util/fd.o 00:02:34.348 CC lib/util/fd_group.o 00:02:34.348 CC lib/util/file.o 00:02:34.348 CC lib/util/hexlify.o 00:02:34.348 CC lib/util/iov.o 00:02:34.348 CC lib/util/math.o 00:02:34.348 CC lib/util/net.o 00:02:34.348 CC lib/util/pipe.o 00:02:34.348 CC lib/util/strerror_tls.o 00:02:34.348 CC lib/util/string.o 00:02:34.348 CC lib/util/uuid.o 00:02:34.348 CC lib/util/xor.o 00:02:34.348 CC lib/util/md5.o 00:02:34.348 CC lib/util/zipf.o 00:02:34.348 CC lib/vfio_user/host/vfio_user.o 00:02:34.348 CC lib/vfio_user/host/vfio_user_pci.o 00:02:34.348 LIB libspdk_dma.a 00:02:34.348 SO libspdk_dma.so.5.0 00:02:34.647 LIB libspdk_ioat.a 00:02:34.647 SYMLINK libspdk_dma.so 00:02:34.647 SO libspdk_ioat.so.7.0 00:02:34.647 LIB libspdk_vfio_user.a 00:02:34.647 SYMLINK libspdk_ioat.so 00:02:34.647 SO libspdk_vfio_user.so.5.0 00:02:34.647 SYMLINK libspdk_vfio_user.so 00:02:34.647 LIB libspdk_util.a 00:02:34.909 SO libspdk_util.so.10.1 00:02:34.909 SYMLINK libspdk_util.so 00:02:35.171 LIB libspdk_trace_parser.a 00:02:35.171 SO libspdk_trace_parser.so.6.0 00:02:35.171 SYMLINK libspdk_trace_parser.so 00:02:35.432 CC lib/rdma_utils/rdma_utils.o 00:02:35.432 CC lib/vmd/vmd.o 00:02:35.432 CC lib/vmd/led.o 00:02:35.432 CC lib/idxd/idxd.o 00:02:35.432 CC lib/idxd/idxd_user.o 00:02:35.432 CC lib/json/json_parse.o 00:02:35.432 CC lib/json/json_write.o 00:02:35.432 CC lib/idxd/idxd_kernel.o 00:02:35.432 CC lib/conf/conf.o 00:02:35.432 CC lib/json/json_util.o 00:02:35.432 CC lib/env_dpdk/env.o 00:02:35.432 CC lib/env_dpdk/memory.o 00:02:35.432 CC lib/env_dpdk/pci.o 00:02:35.432 CC lib/env_dpdk/init.o 00:02:35.432 CC lib/env_dpdk/threads.o 00:02:35.432 CC lib/env_dpdk/pci_ioat.o 00:02:35.432 CC lib/env_dpdk/pci_virtio.o 00:02:35.432 CC lib/env_dpdk/pci_vmd.o 00:02:35.432 CC lib/env_dpdk/pci_idxd.o 00:02:35.432 CC lib/env_dpdk/pci_event.o 00:02:35.432 CC lib/env_dpdk/sigbus_handler.o 00:02:35.432 CC lib/env_dpdk/pci_dpdk.o 00:02:35.432 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:35.432 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:35.694 LIB libspdk_conf.a 00:02:35.694 LIB libspdk_json.a 00:02:35.694 LIB libspdk_rdma_utils.a 00:02:35.694 SO libspdk_conf.so.6.0 00:02:35.694 SO libspdk_json.so.6.0 00:02:35.694 SO libspdk_rdma_utils.so.1.0 00:02:35.694 SYMLINK libspdk_conf.so 00:02:35.694 SYMLINK libspdk_json.so 00:02:35.694 SYMLINK libspdk_rdma_utils.so 00:02:35.957 LIB libspdk_idxd.a 00:02:35.957 LIB libspdk_vmd.a 00:02:35.957 SO libspdk_idxd.so.12.1 00:02:35.957 SO libspdk_vmd.so.6.0 00:02:35.957 SYMLINK libspdk_idxd.so 00:02:35.957 SYMLINK libspdk_vmd.so 00:02:35.957 CC lib/jsonrpc/jsonrpc_server.o 00:02:35.957 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:35.957 CC lib/jsonrpc/jsonrpc_client.o 00:02:35.957 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:35.957 CC 
lib/rdma_provider/common.o 00:02:35.957 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:36.220 LIB libspdk_rdma_provider.a 00:02:36.220 LIB libspdk_jsonrpc.a 00:02:36.220 SO libspdk_rdma_provider.so.7.0 00:02:36.220 SO libspdk_jsonrpc.so.6.0 00:02:36.481 SYMLINK libspdk_rdma_provider.so 00:02:36.481 SYMLINK libspdk_jsonrpc.so 00:02:36.481 LIB libspdk_env_dpdk.a 00:02:36.743 SO libspdk_env_dpdk.so.15.1 00:02:36.743 SYMLINK libspdk_env_dpdk.so 00:02:36.743 CC lib/rpc/rpc.o 00:02:37.004 LIB libspdk_rpc.a 00:02:37.004 SO libspdk_rpc.so.6.0 00:02:37.004 SYMLINK libspdk_rpc.so 00:02:37.577 CC lib/keyring/keyring.o 00:02:37.577 CC lib/keyring/keyring_rpc.o 00:02:37.577 CC lib/trace/trace.o 00:02:37.577 CC lib/trace/trace_flags.o 00:02:37.577 CC lib/trace/trace_rpc.o 00:02:37.577 CC lib/notify/notify.o 00:02:37.577 CC lib/notify/notify_rpc.o 00:02:37.577 LIB libspdk_notify.a 00:02:37.577 SO libspdk_notify.so.6.0 00:02:37.577 LIB libspdk_keyring.a 00:02:37.577 LIB libspdk_trace.a 00:02:37.838 SO libspdk_keyring.so.2.0 00:02:37.838 SO libspdk_trace.so.11.0 00:02:37.838 SYMLINK libspdk_notify.so 00:02:37.838 SYMLINK libspdk_keyring.so 00:02:37.838 SYMLINK libspdk_trace.so 00:02:38.099 CC lib/sock/sock.o 00:02:38.099 CC lib/sock/sock_rpc.o 00:02:38.099 CC lib/thread/thread.o 00:02:38.099 CC lib/thread/iobuf.o 00:02:38.694 LIB libspdk_sock.a 00:02:38.694 SO libspdk_sock.so.10.0 00:02:38.694 SYMLINK libspdk_sock.so 00:02:38.954 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:38.955 CC lib/nvme/nvme_ctrlr.o 00:02:38.955 CC lib/nvme/nvme_fabric.o 00:02:38.955 CC lib/nvme/nvme_ns_cmd.o 00:02:38.955 CC lib/nvme/nvme_ns.o 00:02:38.955 CC lib/nvme/nvme_pcie_common.o 00:02:38.955 CC lib/nvme/nvme_pcie.o 00:02:38.955 CC lib/nvme/nvme_qpair.o 00:02:38.955 CC lib/nvme/nvme.o 00:02:38.955 CC lib/nvme/nvme_quirks.o 00:02:38.955 CC lib/nvme/nvme_transport.o 00:02:38.955 CC lib/nvme/nvme_discovery.o 00:02:38.955 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:38.955 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:38.955 CC lib/nvme/nvme_tcp.o 00:02:38.955 CC lib/nvme/nvme_opal.o 00:02:38.955 CC lib/nvme/nvme_io_msg.o 00:02:38.955 CC lib/nvme/nvme_poll_group.o 00:02:38.955 CC lib/nvme/nvme_auth.o 00:02:38.955 CC lib/nvme/nvme_zns.o 00:02:38.955 CC lib/nvme/nvme_stubs.o 00:02:38.955 CC lib/nvme/nvme_cuse.o 00:02:38.955 CC lib/nvme/nvme_vfio_user.o 00:02:38.955 CC lib/nvme/nvme_rdma.o 00:02:39.525 LIB libspdk_thread.a 00:02:39.525 SO libspdk_thread.so.11.0 00:02:39.525 SYMLINK libspdk_thread.so 00:02:40.099 CC lib/blob/request.o 00:02:40.099 CC lib/blob/blobstore.o 00:02:40.099 CC lib/blob/zeroes.o 00:02:40.099 CC lib/blob/blob_bs_dev.o 00:02:40.099 CC lib/virtio/virtio.o 00:02:40.099 CC lib/virtio/virtio_vhost_user.o 00:02:40.099 CC lib/virtio/virtio_vfio_user.o 00:02:40.099 CC lib/virtio/virtio_pci.o 00:02:40.099 CC lib/accel/accel_rpc.o 00:02:40.099 CC lib/init/json_config.o 00:02:40.099 CC lib/init/subsystem_rpc.o 00:02:40.099 CC lib/init/subsystem.o 00:02:40.099 CC lib/accel/accel.o 00:02:40.099 CC lib/accel/accel_sw.o 00:02:40.099 CC lib/init/rpc.o 00:02:40.099 CC lib/vfu_tgt/tgt_endpoint.o 00:02:40.099 CC lib/vfu_tgt/tgt_rpc.o 00:02:40.099 CC lib/fsdev/fsdev.o 00:02:40.099 CC lib/fsdev/fsdev_io.o 00:02:40.099 CC lib/fsdev/fsdev_rpc.o 00:02:40.361 LIB libspdk_init.a 00:02:40.361 SO libspdk_init.so.6.0 00:02:40.361 LIB libspdk_virtio.a 00:02:40.361 LIB libspdk_vfu_tgt.a 00:02:40.361 SO libspdk_virtio.so.7.0 00:02:40.361 SO libspdk_vfu_tgt.so.3.0 00:02:40.361 SYMLINK libspdk_init.so 00:02:40.361 SYMLINK libspdk_virtio.so 00:02:40.361 SYMLINK 
libspdk_vfu_tgt.so 00:02:40.623 LIB libspdk_fsdev.a 00:02:40.623 SO libspdk_fsdev.so.2.0 00:02:40.623 SYMLINK libspdk_fsdev.so 00:02:40.623 CC lib/event/app.o 00:02:40.623 CC lib/event/reactor.o 00:02:40.623 CC lib/event/log_rpc.o 00:02:40.623 CC lib/event/app_rpc.o 00:02:40.623 CC lib/event/scheduler_static.o 00:02:40.885 LIB libspdk_accel.a 00:02:40.885 LIB libspdk_nvme.a 00:02:40.885 SO libspdk_accel.so.16.0 00:02:41.147 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:41.147 SYMLINK libspdk_accel.so 00:02:41.147 SO libspdk_nvme.so.15.0 00:02:41.147 LIB libspdk_event.a 00:02:41.147 SO libspdk_event.so.14.0 00:02:41.147 SYMLINK libspdk_event.so 00:02:41.408 SYMLINK libspdk_nvme.so 00:02:41.408 CC lib/bdev/bdev.o 00:02:41.408 CC lib/bdev/bdev_rpc.o 00:02:41.408 CC lib/bdev/bdev_zone.o 00:02:41.408 CC lib/bdev/part.o 00:02:41.408 CC lib/bdev/scsi_nvme.o 00:02:41.670 LIB libspdk_fuse_dispatcher.a 00:02:41.670 SO libspdk_fuse_dispatcher.so.1.0 00:02:41.670 SYMLINK libspdk_fuse_dispatcher.so 00:02:42.613 LIB libspdk_blob.a 00:02:42.613 SO libspdk_blob.so.12.0 00:02:42.875 SYMLINK libspdk_blob.so 00:02:43.135 CC lib/lvol/lvol.o 00:02:43.135 CC lib/blobfs/blobfs.o 00:02:43.135 CC lib/blobfs/tree.o 00:02:43.707 LIB libspdk_bdev.a 00:02:43.969 SO libspdk_bdev.so.17.0 00:02:43.969 LIB libspdk_blobfs.a 00:02:43.969 SO libspdk_blobfs.so.11.0 00:02:43.969 SYMLINK libspdk_bdev.so 00:02:43.969 LIB libspdk_lvol.a 00:02:43.969 SYMLINK libspdk_blobfs.so 00:02:43.969 SO libspdk_lvol.so.11.0 00:02:44.231 SYMLINK libspdk_lvol.so 00:02:44.231 CC lib/nbd/nbd.o 00:02:44.231 CC lib/nvmf/ctrlr.o 00:02:44.231 CC lib/scsi/dev.o 00:02:44.231 CC lib/ftl/ftl_core.o 00:02:44.231 CC lib/scsi/lun.o 00:02:44.231 CC lib/nvmf/ctrlr_discovery.o 00:02:44.231 CC lib/ftl/ftl_init.o 00:02:44.231 CC lib/nbd/nbd_rpc.o 00:02:44.231 CC lib/scsi/port.o 00:02:44.231 CC lib/nvmf/ctrlr_bdev.o 00:02:44.231 CC lib/nvmf/nvmf.o 00:02:44.231 CC lib/ftl/ftl_layout.o 00:02:44.231 CC lib/scsi/scsi.o 00:02:44.231 CC lib/nvmf/subsystem.o 00:02:44.231 CC lib/scsi/scsi_bdev.o 00:02:44.231 CC lib/ftl/ftl_debug.o 00:02:44.231 CC lib/ftl/ftl_io.o 00:02:44.231 CC lib/nvmf/nvmf_rpc.o 00:02:44.231 CC lib/scsi/scsi_pr.o 00:02:44.231 CC lib/ftl/ftl_sb.o 00:02:44.231 CC lib/scsi/scsi_rpc.o 00:02:44.231 CC lib/nvmf/transport.o 00:02:44.231 CC lib/ftl/ftl_l2p.o 00:02:44.231 CC lib/scsi/task.o 00:02:44.231 CC lib/ublk/ublk.o 00:02:44.231 CC lib/nvmf/tcp.o 00:02:44.231 CC lib/ftl/ftl_l2p_flat.o 00:02:44.231 CC lib/ublk/ublk_rpc.o 00:02:44.231 CC lib/ftl/ftl_nv_cache.o 00:02:44.231 CC lib/nvmf/stubs.o 00:02:44.231 CC lib/nvmf/mdns_server.o 00:02:44.231 CC lib/ftl/ftl_band.o 00:02:44.231 CC lib/nvmf/vfio_user.o 00:02:44.231 CC lib/ftl/ftl_band_ops.o 00:02:44.231 CC lib/ftl/ftl_writer.o 00:02:44.231 CC lib/nvmf/rdma.o 00:02:44.231 CC lib/ftl/ftl_rq.o 00:02:44.231 CC lib/nvmf/auth.o 00:02:44.231 CC lib/ftl/ftl_reloc.o 00:02:44.231 CC lib/ftl/ftl_l2p_cache.o 00:02:44.490 CC lib/ftl/ftl_p2l.o 00:02:44.490 CC lib/ftl/ftl_p2l_log.o 00:02:44.490 CC lib/ftl/mngt/ftl_mngt.o 00:02:44.490 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:44.490 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:44.490 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:44.490 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:44.490 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:44.490 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:44.490 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:44.490 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:44.490 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:44.490 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:44.490 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:02:44.490 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:44.490 CC lib/ftl/utils/ftl_conf.o 00:02:44.490 CC lib/ftl/utils/ftl_md.o 00:02:44.490 CC lib/ftl/utils/ftl_mempool.o 00:02:44.490 CC lib/ftl/utils/ftl_bitmap.o 00:02:44.490 CC lib/ftl/utils/ftl_property.o 00:02:44.490 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:44.490 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:44.490 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:44.490 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:44.490 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:44.490 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:44.490 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:44.490 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:44.490 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:44.490 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:44.490 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:44.490 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:44.490 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:44.490 CC lib/ftl/base/ftl_base_bdev.o 00:02:44.490 CC lib/ftl/base/ftl_base_dev.o 00:02:44.490 CC lib/ftl/ftl_trace.o 00:02:44.748 LIB libspdk_nbd.a 00:02:45.007 SO libspdk_nbd.so.7.0 00:02:45.007 LIB libspdk_scsi.a 00:02:45.007 SYMLINK libspdk_nbd.so 00:02:45.007 SO libspdk_scsi.so.9.0 00:02:45.007 SYMLINK libspdk_scsi.so 00:02:45.007 LIB libspdk_ublk.a 00:02:45.007 SO libspdk_ublk.so.3.0 00:02:45.268 SYMLINK libspdk_ublk.so 00:02:45.268 LIB libspdk_ftl.a 00:02:45.268 CC lib/iscsi/conn.o 00:02:45.268 CC lib/iscsi/init_grp.o 00:02:45.268 CC lib/iscsi/iscsi.o 00:02:45.268 CC lib/iscsi/iscsi_subsystem.o 00:02:45.268 CC lib/iscsi/param.o 00:02:45.268 CC lib/iscsi/portal_grp.o 00:02:45.268 CC lib/vhost/vhost.o 00:02:45.528 CC lib/iscsi/tgt_node.o 00:02:45.528 CC lib/iscsi/iscsi_rpc.o 00:02:45.528 CC lib/vhost/vhost_rpc.o 00:02:45.528 CC lib/vhost/vhost_scsi.o 00:02:45.528 CC lib/iscsi/task.o 00:02:45.528 CC lib/vhost/vhost_blk.o 00:02:45.528 CC lib/vhost/rte_vhost_user.o 00:02:45.528 SO libspdk_ftl.so.9.0 00:02:45.788 SYMLINK libspdk_ftl.so 00:02:46.361 LIB libspdk_nvmf.a 00:02:46.361 SO libspdk_nvmf.so.20.0 00:02:46.361 LIB libspdk_vhost.a 00:02:46.361 SO libspdk_vhost.so.8.0 00:02:46.622 SYMLINK libspdk_nvmf.so 00:02:46.622 SYMLINK libspdk_vhost.so 00:02:46.622 LIB libspdk_iscsi.a 00:02:46.622 SO libspdk_iscsi.so.8.0 00:02:46.882 SYMLINK libspdk_iscsi.so 00:02:47.455 CC module/vfu_device/vfu_virtio.o 00:02:47.455 CC module/vfu_device/vfu_virtio_blk.o 00:02:47.455 CC module/vfu_device/vfu_virtio_rpc.o 00:02:47.455 CC module/vfu_device/vfu_virtio_scsi.o 00:02:47.455 CC module/vfu_device/vfu_virtio_fs.o 00:02:47.455 CC module/env_dpdk/env_dpdk_rpc.o 00:02:47.455 CC module/accel/ioat/accel_ioat.o 00:02:47.455 CC module/accel/ioat/accel_ioat_rpc.o 00:02:47.455 LIB libspdk_env_dpdk_rpc.a 00:02:47.455 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:47.455 CC module/accel/iaa/accel_iaa.o 00:02:47.455 CC module/accel/iaa/accel_iaa_rpc.o 00:02:47.455 CC module/accel/error/accel_error.o 00:02:47.455 CC module/accel/error/accel_error_rpc.o 00:02:47.455 CC module/accel/dsa/accel_dsa.o 00:02:47.455 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:47.455 CC module/keyring/file/keyring.o 00:02:47.455 CC module/accel/dsa/accel_dsa_rpc.o 00:02:47.455 CC module/keyring/file/keyring_rpc.o 00:02:47.455 CC module/fsdev/aio/fsdev_aio.o 00:02:47.455 CC module/blob/bdev/blob_bdev.o 00:02:47.455 CC module/scheduler/gscheduler/gscheduler.o 00:02:47.455 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:47.455 CC module/fsdev/aio/linux_aio_mgr.o 00:02:47.455 CC module/sock/posix/posix.o 00:02:47.455 CC 
module/keyring/linux/keyring.o 00:02:47.455 CC module/keyring/linux/keyring_rpc.o 00:02:47.455 SO libspdk_env_dpdk_rpc.so.6.0 00:02:47.716 SYMLINK libspdk_env_dpdk_rpc.so 00:02:47.716 LIB libspdk_scheduler_dpdk_governor.a 00:02:47.716 LIB libspdk_accel_ioat.a 00:02:47.716 LIB libspdk_keyring_file.a 00:02:47.716 LIB libspdk_scheduler_gscheduler.a 00:02:47.716 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:47.716 LIB libspdk_keyring_linux.a 00:02:47.716 SO libspdk_keyring_file.so.2.0 00:02:47.716 SO libspdk_accel_ioat.so.6.0 00:02:47.716 LIB libspdk_accel_iaa.a 00:02:47.716 SO libspdk_scheduler_gscheduler.so.4.0 00:02:47.716 LIB libspdk_accel_error.a 00:02:47.716 SO libspdk_keyring_linux.so.1.0 00:02:47.716 LIB libspdk_scheduler_dynamic.a 00:02:47.716 SO libspdk_accel_iaa.so.3.0 00:02:47.716 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:47.975 SO libspdk_scheduler_dynamic.so.4.0 00:02:47.975 SYMLINK libspdk_keyring_file.so 00:02:47.975 SO libspdk_accel_error.so.2.0 00:02:47.975 SYMLINK libspdk_accel_ioat.so 00:02:47.975 SYMLINK libspdk_scheduler_gscheduler.so 00:02:47.975 LIB libspdk_accel_dsa.a 00:02:47.975 LIB libspdk_blob_bdev.a 00:02:47.975 SYMLINK libspdk_keyring_linux.so 00:02:47.975 SYMLINK libspdk_accel_iaa.so 00:02:47.975 SYMLINK libspdk_scheduler_dynamic.so 00:02:47.975 SO libspdk_accel_dsa.so.5.0 00:02:47.975 SO libspdk_blob_bdev.so.12.0 00:02:47.975 SYMLINK libspdk_accel_error.so 00:02:47.975 LIB libspdk_vfu_device.a 00:02:47.975 SYMLINK libspdk_accel_dsa.so 00:02:47.975 SYMLINK libspdk_blob_bdev.so 00:02:47.975 SO libspdk_vfu_device.so.3.0 00:02:47.975 SYMLINK libspdk_vfu_device.so 00:02:48.234 LIB libspdk_fsdev_aio.a 00:02:48.234 SO libspdk_fsdev_aio.so.1.0 00:02:48.234 LIB libspdk_sock_posix.a 00:02:48.234 SO libspdk_sock_posix.so.6.0 00:02:48.234 SYMLINK libspdk_fsdev_aio.so 00:02:48.494 SYMLINK libspdk_sock_posix.so 00:02:48.494 CC module/bdev/raid/bdev_raid.o 00:02:48.494 CC module/bdev/raid/bdev_raid_sb.o 00:02:48.494 CC module/bdev/raid/bdev_raid_rpc.o 00:02:48.494 CC module/bdev/raid/raid0.o 00:02:48.494 CC module/bdev/raid/raid1.o 00:02:48.494 CC module/bdev/raid/concat.o 00:02:48.494 CC module/bdev/error/vbdev_error.o 00:02:48.494 CC module/bdev/delay/vbdev_delay.o 00:02:48.494 CC module/bdev/error/vbdev_error_rpc.o 00:02:48.494 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:48.494 CC module/bdev/null/bdev_null.o 00:02:48.494 CC module/bdev/null/bdev_null_rpc.o 00:02:48.494 CC module/bdev/aio/bdev_aio.o 00:02:48.494 CC module/bdev/nvme/bdev_nvme.o 00:02:48.494 CC module/bdev/gpt/gpt.o 00:02:48.494 CC module/bdev/split/vbdev_split.o 00:02:48.494 CC module/bdev/aio/bdev_aio_rpc.o 00:02:48.494 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:48.494 CC module/bdev/nvme/bdev_mdns_client.o 00:02:48.494 CC module/bdev/split/vbdev_split_rpc.o 00:02:48.494 CC module/bdev/gpt/vbdev_gpt.o 00:02:48.494 CC module/bdev/nvme/nvme_rpc.o 00:02:48.494 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:48.494 CC module/bdev/nvme/vbdev_opal.o 00:02:48.494 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:48.494 CC module/bdev/lvol/vbdev_lvol.o 00:02:48.494 CC module/bdev/passthru/vbdev_passthru.o 00:02:48.494 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:48.494 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:48.494 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:48.494 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:48.494 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:48.494 CC module/bdev/iscsi/bdev_iscsi.o 00:02:48.494 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:48.494 CC 
module/bdev/malloc/bdev_malloc.o 00:02:48.494 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:48.494 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:48.494 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:48.494 CC module/blobfs/bdev/blobfs_bdev.o 00:02:48.494 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:48.494 CC module/bdev/ftl/bdev_ftl.o 00:02:48.495 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:48.754 LIB libspdk_blobfs_bdev.a 00:02:48.754 LIB libspdk_bdev_error.a 00:02:48.754 LIB libspdk_bdev_null.a 00:02:48.754 SO libspdk_blobfs_bdev.so.6.0 00:02:48.754 LIB libspdk_bdev_gpt.a 00:02:48.754 SO libspdk_bdev_null.so.6.0 00:02:48.754 LIB libspdk_bdev_split.a 00:02:48.754 SO libspdk_bdev_error.so.6.0 00:02:48.754 LIB libspdk_bdev_passthru.a 00:02:49.014 LIB libspdk_bdev_ftl.a 00:02:49.014 SO libspdk_bdev_gpt.so.6.0 00:02:49.014 SO libspdk_bdev_split.so.6.0 00:02:49.014 SO libspdk_bdev_passthru.so.6.0 00:02:49.014 SYMLINK libspdk_blobfs_bdev.so 00:02:49.014 LIB libspdk_bdev_aio.a 00:02:49.014 SYMLINK libspdk_bdev_error.so 00:02:49.014 LIB libspdk_bdev_zone_block.a 00:02:49.014 SYMLINK libspdk_bdev_null.so 00:02:49.014 SO libspdk_bdev_ftl.so.6.0 00:02:49.014 LIB libspdk_bdev_delay.a 00:02:49.014 SYMLINK libspdk_bdev_split.so 00:02:49.014 LIB libspdk_bdev_malloc.a 00:02:49.014 LIB libspdk_bdev_iscsi.a 00:02:49.014 SO libspdk_bdev_aio.so.6.0 00:02:49.014 SYMLINK libspdk_bdev_passthru.so 00:02:49.014 SYMLINK libspdk_bdev_gpt.so 00:02:49.014 SO libspdk_bdev_zone_block.so.6.0 00:02:49.014 SO libspdk_bdev_delay.so.6.0 00:02:49.014 SO libspdk_bdev_iscsi.so.6.0 00:02:49.014 SO libspdk_bdev_malloc.so.6.0 00:02:49.014 SYMLINK libspdk_bdev_ftl.so 00:02:49.014 SYMLINK libspdk_bdev_aio.so 00:02:49.014 SYMLINK libspdk_bdev_zone_block.so 00:02:49.014 SYMLINK libspdk_bdev_delay.so 00:02:49.014 SYMLINK libspdk_bdev_iscsi.so 00:02:49.014 SYMLINK libspdk_bdev_malloc.so 00:02:49.014 LIB libspdk_bdev_virtio.a 00:02:49.014 LIB libspdk_bdev_lvol.a 00:02:49.014 SO libspdk_bdev_lvol.so.6.0 00:02:49.014 SO libspdk_bdev_virtio.so.6.0 00:02:49.275 SYMLINK libspdk_bdev_lvol.so 00:02:49.275 SYMLINK libspdk_bdev_virtio.so 00:02:49.536 LIB libspdk_bdev_raid.a 00:02:49.536 SO libspdk_bdev_raid.so.6.0 00:02:49.536 SYMLINK libspdk_bdev_raid.so 00:02:50.919 LIB libspdk_bdev_nvme.a 00:02:50.919 SO libspdk_bdev_nvme.so.7.1 00:02:50.919 SYMLINK libspdk_bdev_nvme.so 00:02:51.863 CC module/event/subsystems/sock/sock.o 00:02:51.863 CC module/event/subsystems/iobuf/iobuf.o 00:02:51.863 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:51.863 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:51.863 CC module/event/subsystems/vmd/vmd.o 00:02:51.863 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:51.863 CC module/event/subsystems/fsdev/fsdev.o 00:02:51.863 CC module/event/subsystems/scheduler/scheduler.o 00:02:51.863 CC module/event/subsystems/keyring/keyring.o 00:02:51.863 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:51.863 LIB libspdk_event_vmd.a 00:02:51.863 LIB libspdk_event_vhost_blk.a 00:02:51.863 LIB libspdk_event_sock.a 00:02:51.863 LIB libspdk_event_vfu_tgt.a 00:02:51.863 LIB libspdk_event_fsdev.a 00:02:51.863 LIB libspdk_event_iobuf.a 00:02:51.863 LIB libspdk_event_keyring.a 00:02:51.863 LIB libspdk_event_scheduler.a 00:02:51.863 SO libspdk_event_sock.so.5.0 00:02:51.863 SO libspdk_event_vhost_blk.so.3.0 00:02:51.863 SO libspdk_event_vmd.so.6.0 00:02:51.863 SO libspdk_event_vfu_tgt.so.3.0 00:02:51.863 SO libspdk_event_iobuf.so.3.0 00:02:51.863 SO libspdk_event_fsdev.so.1.0 00:02:51.863 SO libspdk_event_keyring.so.1.0 00:02:51.863 
SO libspdk_event_scheduler.so.4.0 00:02:51.863 SYMLINK libspdk_event_sock.so 00:02:51.863 SYMLINK libspdk_event_vhost_blk.so 00:02:51.863 SYMLINK libspdk_event_vfu_tgt.so 00:02:51.863 SYMLINK libspdk_event_fsdev.so 00:02:51.863 SYMLINK libspdk_event_iobuf.so 00:02:51.863 SYMLINK libspdk_event_vmd.so 00:02:51.863 SYMLINK libspdk_event_keyring.so 00:02:51.863 SYMLINK libspdk_event_scheduler.so 00:02:52.434 CC module/event/subsystems/accel/accel.o 00:02:52.434 LIB libspdk_event_accel.a 00:02:52.434 SO libspdk_event_accel.so.6.0 00:02:52.434 SYMLINK libspdk_event_accel.so 00:02:53.009 CC module/event/subsystems/bdev/bdev.o 00:02:53.009 LIB libspdk_event_bdev.a 00:02:53.009 SO libspdk_event_bdev.so.6.0 00:02:53.270 SYMLINK libspdk_event_bdev.so 00:02:53.529 CC module/event/subsystems/scsi/scsi.o 00:02:53.529 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:53.530 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:53.530 CC module/event/subsystems/ublk/ublk.o 00:02:53.530 CC module/event/subsystems/nbd/nbd.o 00:02:53.789 LIB libspdk_event_ublk.a 00:02:53.789 LIB libspdk_event_nbd.a 00:02:53.789 LIB libspdk_event_scsi.a 00:02:53.789 SO libspdk_event_ublk.so.3.0 00:02:53.789 SO libspdk_event_nbd.so.6.0 00:02:53.789 SO libspdk_event_scsi.so.6.0 00:02:53.789 LIB libspdk_event_nvmf.a 00:02:53.789 SYMLINK libspdk_event_scsi.so 00:02:53.789 SYMLINK libspdk_event_ublk.so 00:02:53.789 SYMLINK libspdk_event_nbd.so 00:02:53.789 SO libspdk_event_nvmf.so.6.0 00:02:54.049 SYMLINK libspdk_event_nvmf.so 00:02:54.049 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:54.331 CC module/event/subsystems/iscsi/iscsi.o 00:02:54.331 LIB libspdk_event_vhost_scsi.a 00:02:54.331 LIB libspdk_event_iscsi.a 00:02:54.331 SO libspdk_event_vhost_scsi.so.3.0 00:02:54.331 SO libspdk_event_iscsi.so.6.0 00:02:54.642 SYMLINK libspdk_event_vhost_scsi.so 00:02:54.642 SYMLINK libspdk_event_iscsi.so 00:02:54.642 SO libspdk.so.6.0 00:02:54.642 SYMLINK libspdk.so 00:02:54.965 CC test/rpc_client/rpc_client_test.o 00:02:54.965 CC app/trace_record/trace_record.o 00:02:54.965 CXX app/trace/trace.o 00:02:54.965 CC app/spdk_nvme_perf/perf.o 00:02:54.965 TEST_HEADER include/spdk/accel.h 00:02:54.965 TEST_HEADER include/spdk/accel_module.h 00:02:54.965 TEST_HEADER include/spdk/assert.h 00:02:54.965 CC app/spdk_lspci/spdk_lspci.o 00:02:54.965 TEST_HEADER include/spdk/barrier.h 00:02:54.965 TEST_HEADER include/spdk/base64.h 00:02:54.965 TEST_HEADER include/spdk/bdev.h 00:02:54.965 CC app/spdk_top/spdk_top.o 00:02:54.965 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:54.965 TEST_HEADER include/spdk/bdev_module.h 00:02:54.965 TEST_HEADER include/spdk/bdev_zone.h 00:02:55.227 TEST_HEADER include/spdk/bit_array.h 00:02:55.227 TEST_HEADER include/spdk/blob_bdev.h 00:02:55.227 TEST_HEADER include/spdk/bit_pool.h 00:02:55.227 CC app/spdk_nvme_discover/discovery_aer.o 00:02:55.227 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:55.227 TEST_HEADER include/spdk/blobfs.h 00:02:55.227 TEST_HEADER include/spdk/blob.h 00:02:55.227 TEST_HEADER include/spdk/conf.h 00:02:55.227 TEST_HEADER include/spdk/config.h 00:02:55.227 TEST_HEADER include/spdk/cpuset.h 00:02:55.227 TEST_HEADER include/spdk/crc16.h 00:02:55.227 TEST_HEADER include/spdk/dif.h 00:02:55.227 TEST_HEADER include/spdk/crc32.h 00:02:55.227 TEST_HEADER include/spdk/crc64.h 00:02:55.227 CC app/spdk_nvme_identify/identify.o 00:02:55.227 TEST_HEADER include/spdk/dma.h 00:02:55.227 TEST_HEADER include/spdk/endian.h 00:02:55.227 TEST_HEADER include/spdk/env.h 00:02:55.227 TEST_HEADER 
include/spdk/env_dpdk.h 00:02:55.227 TEST_HEADER include/spdk/event.h 00:02:55.227 TEST_HEADER include/spdk/fd_group.h 00:02:55.227 TEST_HEADER include/spdk/fd.h 00:02:55.227 TEST_HEADER include/spdk/file.h 00:02:55.228 TEST_HEADER include/spdk/fsdev.h 00:02:55.228 TEST_HEADER include/spdk/fsdev_module.h 00:02:55.228 TEST_HEADER include/spdk/ftl.h 00:02:55.228 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:55.228 TEST_HEADER include/spdk/gpt_spec.h 00:02:55.228 TEST_HEADER include/spdk/hexlify.h 00:02:55.228 TEST_HEADER include/spdk/histogram_data.h 00:02:55.228 TEST_HEADER include/spdk/idxd.h 00:02:55.228 TEST_HEADER include/spdk/idxd_spec.h 00:02:55.228 TEST_HEADER include/spdk/ioat.h 00:02:55.228 TEST_HEADER include/spdk/init.h 00:02:55.228 TEST_HEADER include/spdk/ioat_spec.h 00:02:55.228 TEST_HEADER include/spdk/iscsi_spec.h 00:02:55.228 TEST_HEADER include/spdk/json.h 00:02:55.228 CC app/nvmf_tgt/nvmf_main.o 00:02:55.228 TEST_HEADER include/spdk/jsonrpc.h 00:02:55.228 TEST_HEADER include/spdk/keyring.h 00:02:55.228 TEST_HEADER include/spdk/keyring_module.h 00:02:55.228 TEST_HEADER include/spdk/likely.h 00:02:55.228 TEST_HEADER include/spdk/md5.h 00:02:55.228 TEST_HEADER include/spdk/log.h 00:02:55.228 TEST_HEADER include/spdk/lvol.h 00:02:55.228 TEST_HEADER include/spdk/nbd.h 00:02:55.228 TEST_HEADER include/spdk/memory.h 00:02:55.228 TEST_HEADER include/spdk/mmio.h 00:02:55.228 CC app/spdk_dd/spdk_dd.o 00:02:55.228 TEST_HEADER include/spdk/notify.h 00:02:55.228 TEST_HEADER include/spdk/net.h 00:02:55.228 TEST_HEADER include/spdk/nvme.h 00:02:55.228 CC app/iscsi_tgt/iscsi_tgt.o 00:02:55.228 TEST_HEADER include/spdk/nvme_intel.h 00:02:55.228 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:55.228 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:55.228 TEST_HEADER include/spdk/nvme_zns.h 00:02:55.228 TEST_HEADER include/spdk/nvme_spec.h 00:02:55.228 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:55.228 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:55.228 CC app/spdk_tgt/spdk_tgt.o 00:02:55.228 TEST_HEADER include/spdk/nvmf.h 00:02:55.228 TEST_HEADER include/spdk/nvmf_spec.h 00:02:55.228 TEST_HEADER include/spdk/nvmf_transport.h 00:02:55.228 TEST_HEADER include/spdk/opal.h 00:02:55.228 TEST_HEADER include/spdk/opal_spec.h 00:02:55.228 TEST_HEADER include/spdk/pci_ids.h 00:02:55.228 TEST_HEADER include/spdk/pipe.h 00:02:55.228 TEST_HEADER include/spdk/queue.h 00:02:55.228 TEST_HEADER include/spdk/reduce.h 00:02:55.228 TEST_HEADER include/spdk/scheduler.h 00:02:55.228 TEST_HEADER include/spdk/rpc.h 00:02:55.228 TEST_HEADER include/spdk/scsi.h 00:02:55.228 TEST_HEADER include/spdk/sock.h 00:02:55.228 TEST_HEADER include/spdk/scsi_spec.h 00:02:55.228 TEST_HEADER include/spdk/stdinc.h 00:02:55.228 TEST_HEADER include/spdk/string.h 00:02:55.228 TEST_HEADER include/spdk/thread.h 00:02:55.228 TEST_HEADER include/spdk/trace.h 00:02:55.228 TEST_HEADER include/spdk/trace_parser.h 00:02:55.228 TEST_HEADER include/spdk/tree.h 00:02:55.228 TEST_HEADER include/spdk/ublk.h 00:02:55.228 TEST_HEADER include/spdk/util.h 00:02:55.228 TEST_HEADER include/spdk/uuid.h 00:02:55.228 TEST_HEADER include/spdk/version.h 00:02:55.228 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:55.228 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:55.228 TEST_HEADER include/spdk/vmd.h 00:02:55.228 TEST_HEADER include/spdk/vhost.h 00:02:55.228 TEST_HEADER include/spdk/xor.h 00:02:55.228 TEST_HEADER include/spdk/zipf.h 00:02:55.228 CXX test/cpp_headers/accel_module.o 00:02:55.228 CXX test/cpp_headers/accel.o 00:02:55.228 CXX 
test/cpp_headers/barrier.o 00:02:55.228 CXX test/cpp_headers/assert.o 00:02:55.228 CXX test/cpp_headers/bdev.o 00:02:55.228 CXX test/cpp_headers/base64.o 00:02:55.228 CXX test/cpp_headers/bdev_module.o 00:02:55.228 CXX test/cpp_headers/bdev_zone.o 00:02:55.228 CXX test/cpp_headers/bit_array.o 00:02:55.228 CXX test/cpp_headers/bit_pool.o 00:02:55.228 CXX test/cpp_headers/blob_bdev.o 00:02:55.228 CXX test/cpp_headers/blobfs.o 00:02:55.228 CXX test/cpp_headers/blobfs_bdev.o 00:02:55.228 CXX test/cpp_headers/blob.o 00:02:55.228 CXX test/cpp_headers/conf.o 00:02:55.228 CXX test/cpp_headers/config.o 00:02:55.228 CXX test/cpp_headers/cpuset.o 00:02:55.228 CXX test/cpp_headers/crc16.o 00:02:55.228 CXX test/cpp_headers/crc32.o 00:02:55.228 CXX test/cpp_headers/crc64.o 00:02:55.228 CXX test/cpp_headers/dif.o 00:02:55.228 CXX test/cpp_headers/dma.o 00:02:55.228 CXX test/cpp_headers/endian.o 00:02:55.228 CXX test/cpp_headers/env_dpdk.o 00:02:55.228 CXX test/cpp_headers/event.o 00:02:55.228 CXX test/cpp_headers/fd_group.o 00:02:55.228 CXX test/cpp_headers/env.o 00:02:55.228 CXX test/cpp_headers/fd.o 00:02:55.228 CXX test/cpp_headers/fsdev.o 00:02:55.228 CXX test/cpp_headers/file.o 00:02:55.228 CXX test/cpp_headers/ftl.o 00:02:55.228 CXX test/cpp_headers/fsdev_module.o 00:02:55.228 CXX test/cpp_headers/fuse_dispatcher.o 00:02:55.228 CXX test/cpp_headers/gpt_spec.o 00:02:55.228 CXX test/cpp_headers/idxd.o 00:02:55.228 CXX test/cpp_headers/hexlify.o 00:02:55.228 CXX test/cpp_headers/init.o 00:02:55.228 CXX test/cpp_headers/histogram_data.o 00:02:55.228 CXX test/cpp_headers/idxd_spec.o 00:02:55.228 CXX test/cpp_headers/ioat.o 00:02:55.228 CXX test/cpp_headers/ioat_spec.o 00:02:55.228 CXX test/cpp_headers/iscsi_spec.o 00:02:55.228 CXX test/cpp_headers/keyring.o 00:02:55.228 CXX test/cpp_headers/jsonrpc.o 00:02:55.228 CXX test/cpp_headers/json.o 00:02:55.228 CXX test/cpp_headers/likely.o 00:02:55.228 CXX test/cpp_headers/keyring_module.o 00:02:55.228 CXX test/cpp_headers/lvol.o 00:02:55.228 CXX test/cpp_headers/log.o 00:02:55.228 CXX test/cpp_headers/md5.o 00:02:55.228 CXX test/cpp_headers/nbd.o 00:02:55.228 CXX test/cpp_headers/memory.o 00:02:55.228 CXX test/cpp_headers/nvme.o 00:02:55.228 CXX test/cpp_headers/notify.o 00:02:55.228 CXX test/cpp_headers/net.o 00:02:55.228 CXX test/cpp_headers/mmio.o 00:02:55.228 CXX test/cpp_headers/nvme_ocssd.o 00:02:55.228 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:55.228 CXX test/cpp_headers/nvme_intel.o 00:02:55.228 CXX test/cpp_headers/nvme_spec.o 00:02:55.228 CXX test/cpp_headers/nvme_zns.o 00:02:55.228 CXX test/cpp_headers/nvmf.o 00:02:55.228 CXX test/cpp_headers/nvmf_cmd.o 00:02:55.228 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:55.228 CXX test/cpp_headers/nvmf_spec.o 00:02:55.228 CXX test/cpp_headers/nvmf_transport.o 00:02:55.228 CXX test/cpp_headers/opal.o 00:02:55.228 CXX test/cpp_headers/opal_spec.o 00:02:55.228 CXX test/cpp_headers/pci_ids.o 00:02:55.228 CC examples/util/zipf/zipf.o 00:02:55.228 CC test/env/memory/memory_ut.o 00:02:55.228 CXX test/cpp_headers/pipe.o 00:02:55.228 CXX test/cpp_headers/queue.o 00:02:55.228 CXX test/cpp_headers/reduce.o 00:02:55.228 CC test/thread/poller_perf/poller_perf.o 00:02:55.228 CC examples/ioat/verify/verify.o 00:02:55.228 LINK spdk_lspci 00:02:55.228 CXX test/cpp_headers/rpc.o 00:02:55.228 CC test/env/vtophys/vtophys.o 00:02:55.228 CXX test/cpp_headers/scheduler.o 00:02:55.228 CXX test/cpp_headers/scsi.o 00:02:55.228 CXX test/cpp_headers/sock.o 00:02:55.489 CXX test/cpp_headers/stdinc.o 00:02:55.489 CXX 
test/cpp_headers/scsi_spec.o 00:02:55.489 CC test/app/jsoncat/jsoncat.o 00:02:55.489 CXX test/cpp_headers/string.o 00:02:55.489 CXX test/cpp_headers/trace.o 00:02:55.489 CXX test/cpp_headers/thread.o 00:02:55.489 CXX test/cpp_headers/trace_parser.o 00:02:55.489 CXX test/cpp_headers/util.o 00:02:55.489 CC test/env/pci/pci_ut.o 00:02:55.489 CXX test/cpp_headers/tree.o 00:02:55.489 CXX test/cpp_headers/ublk.o 00:02:55.489 CXX test/cpp_headers/uuid.o 00:02:55.489 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:55.489 CC examples/ioat/perf/perf.o 00:02:55.489 CXX test/cpp_headers/vfio_user_pci.o 00:02:55.489 CC test/app/histogram_perf/histogram_perf.o 00:02:55.489 CXX test/cpp_headers/version.o 00:02:55.489 CC test/dma/test_dma/test_dma.o 00:02:55.489 CXX test/cpp_headers/vmd.o 00:02:55.489 CXX test/cpp_headers/vfio_user_spec.o 00:02:55.489 CC test/app/stub/stub.o 00:02:55.489 CXX test/cpp_headers/vhost.o 00:02:55.489 CXX test/cpp_headers/xor.o 00:02:55.489 CXX test/cpp_headers/zipf.o 00:02:55.489 CC app/fio/nvme/fio_plugin.o 00:02:55.489 LINK interrupt_tgt 00:02:55.489 LINK rpc_client_test 00:02:55.489 CC test/app/bdev_svc/bdev_svc.o 00:02:55.489 CC app/fio/bdev/fio_plugin.o 00:02:55.490 LINK spdk_nvme_discover 00:02:55.490 LINK nvmf_tgt 00:02:55.749 LINK spdk_trace_record 00:02:55.749 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:55.749 LINK spdk_tgt 00:02:55.749 LINK iscsi_tgt 00:02:55.749 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:55.749 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:55.749 CC test/env/mem_callbacks/mem_callbacks.o 00:02:55.749 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:56.006 LINK stub 00:02:56.006 LINK spdk_dd 00:02:56.006 LINK vtophys 00:02:56.006 LINK zipf 00:02:56.006 LINK poller_perf 00:02:56.006 LINK jsoncat 00:02:56.006 LINK verify 00:02:56.006 LINK env_dpdk_post_init 00:02:56.006 LINK histogram_perf 00:02:56.006 LINK ioat_perf 00:02:56.006 LINK spdk_trace 00:02:56.006 LINK bdev_svc 00:02:56.265 LINK test_dma 00:02:56.265 LINK spdk_nvme_perf 00:02:56.265 LINK pci_ut 00:02:56.265 CC test/event/event_perf/event_perf.o 00:02:56.265 LINK spdk_top 00:02:56.265 CC test/event/reactor_perf/reactor_perf.o 00:02:56.265 CC test/event/reactor/reactor.o 00:02:56.265 LINK spdk_bdev 00:02:56.265 LINK nvme_fuzz 00:02:56.526 CC test/event/app_repeat/app_repeat.o 00:02:56.526 LINK vhost_fuzz 00:02:56.526 LINK spdk_nvme 00:02:56.526 CC examples/vmd/led/led.o 00:02:56.526 CC examples/sock/hello_world/hello_sock.o 00:02:56.526 CC test/event/scheduler/scheduler.o 00:02:56.526 CC examples/vmd/lsvmd/lsvmd.o 00:02:56.526 CC examples/idxd/perf/perf.o 00:02:56.526 CC app/vhost/vhost.o 00:02:56.526 CC examples/thread/thread/thread_ex.o 00:02:56.526 LINK spdk_nvme_identify 00:02:56.526 LINK mem_callbacks 00:02:56.526 LINK reactor_perf 00:02:56.526 LINK event_perf 00:02:56.526 LINK reactor 00:02:56.526 LINK app_repeat 00:02:56.526 LINK led 00:02:56.526 LINK lsvmd 00:02:56.788 LINK hello_sock 00:02:56.788 LINK vhost 00:02:56.788 CC test/nvme/reset/reset.o 00:02:56.788 CC test/nvme/sgl/sgl.o 00:02:56.788 CC test/nvme/compliance/nvme_compliance.o 00:02:56.788 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:56.788 CC test/nvme/e2edp/nvme_dp.o 00:02:56.788 CC test/nvme/startup/startup.o 00:02:56.788 CC test/nvme/simple_copy/simple_copy.o 00:02:56.788 CC test/nvme/aer/aer.o 00:02:56.788 LINK scheduler 00:02:56.788 CC test/nvme/overhead/overhead.o 00:02:56.788 CC test/nvme/fused_ordering/fused_ordering.o 00:02:56.788 CC test/nvme/fdp/fdp.o 00:02:56.788 CC test/nvme/cuse/cuse.o 
00:02:56.788 CC test/nvme/reserve/reserve.o
00:02:56.788 CC test/nvme/err_injection/err_injection.o
00:02:56.788 CC test/nvme/boot_partition/boot_partition.o
00:02:56.788 CC test/nvme/connect_stress/connect_stress.o
00:02:56.788 CC test/blobfs/mkfs/mkfs.o
00:02:56.788 CC test/accel/dif/dif.o
00:02:56.788 LINK idxd_perf
00:02:56.788 LINK thread
00:02:56.788 CC test/lvol/esnap/esnap.o
00:02:56.788 LINK connect_stress
00:02:56.788 LINK startup
00:02:57.049 LINK fused_ordering
00:02:57.049 LINK err_injection
00:02:57.049 LINK boot_partition
00:02:57.049 LINK memory_ut
00:02:57.049 LINK doorbell_aers
00:02:57.049 LINK simple_copy
00:02:57.049 LINK reserve
00:02:57.049 LINK reset
00:02:57.049 LINK sgl
00:02:57.049 LINK nvme_dp
00:02:57.049 LINK mkfs
00:02:57.049 LINK aer
00:02:57.049 LINK nvme_compliance
00:02:57.049 LINK overhead
00:02:57.049 LINK fdp
00:02:57.049 CC examples/nvme/hello_world/hello_world.o
00:02:57.049 CC examples/nvme/nvme_manage/nvme_manage.o
00:02:57.049 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:57.049 CC examples/nvme/arbitration/arbitration.o
00:02:57.049 CC examples/nvme/reconnect/reconnect.o
00:02:57.049 CC examples/nvme/hotplug/hotplug.o
00:02:57.049 CC examples/nvme/abort/abort.o
00:02:57.049 CC examples/nvme/cmb_copy/cmb_copy.o
00:02:57.309 LINK iscsi_fuzz
00:02:57.309 CC examples/accel/perf/accel_perf.o
00:02:57.309 CC examples/fsdev/hello_world/hello_fsdev.o
00:02:57.309 LINK cmb_copy
00:02:57.309 LINK hello_world
00:02:57.309 CC examples/blob/hello_world/hello_blob.o
00:02:57.309 LINK pmr_persistence
00:02:57.309 LINK dif
00:02:57.309 CC examples/blob/cli/blobcli.o
00:02:57.309 LINK hotplug
00:02:57.570 LINK arbitration
00:02:57.570 LINK reconnect
00:02:57.570 LINK abort
00:02:57.570 LINK nvme_manage
00:02:57.570 LINK hello_blob
00:02:57.570 LINK hello_fsdev
00:02:57.831 LINK accel_perf
00:02:57.831 LINK blobcli
00:02:57.831 CC test/bdev/bdevio/bdevio.o
00:02:57.831 LINK cuse
00:02:58.402 LINK bdevio
00:02:58.402 CC examples/bdev/hello_world/hello_bdev.o
00:02:58.402 CC examples/bdev/bdevperf/bdevperf.o
00:02:58.663 LINK hello_bdev
00:02:59.234 LINK bdevperf
00:02:59.806 CC examples/nvmf/nvmf/nvmf.o
00:03:00.067 LINK nvmf
00:03:01.009 LINK esnap
00:03:01.582 
00:03:01.582 real 0m54.309s
00:03:01.582 user 7m47.539s
00:03:01.582 sys 4m25.635s
00:03:01.582 13:07:23 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:01.582 13:07:23 make -- common/autotest_common.sh@10 -- $ set +x
00:03:01.582 ************************************
00:03:01.582 END TEST make
00:03:01.582 ************************************
00:03:01.582 13:07:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:03:01.582 13:07:23 -- pm/common@29 -- $ signal_monitor_resources TERM
00:03:01.582 13:07:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:03:01.582 13:07:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:01.582 13:07:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:03:01.582 13:07:23 -- pm/common@44 -- $ pid=586021
00:03:01.582 13:07:23 -- pm/common@50 -- $ kill -TERM 586021
00:03:01.582 13:07:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:01.582 13:07:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:03:01.582 13:07:23 -- pm/common@44 -- $ pid=586022
00:03:01.582 13:07:23 -- pm/common@50 -- $ kill -TERM 586022
00:03:01.582 13:07:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:01.582 13:07:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:03:01.582 13:07:23 -- pm/common@44 -- $ pid=586025
00:03:01.582 13:07:23 -- pm/common@50 -- $ kill -TERM 586025
00:03:01.582 13:07:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:01.582 13:07:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:03:01.582 13:07:23 -- pm/common@44 -- $ pid=586049
00:03:01.582 13:07:23 -- pm/common@50 -- $ sudo -E kill -TERM 586049
00:03:01.582 13:07:23 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:03:01.582 13:07:23 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:03:01.582 13:07:24 -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:01.582 13:07:24 -- common/autotest_common.sh@1693 -- # lcov --version
00:03:01.582 13:07:24 -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:01.582 13:07:24 -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:01.582 13:07:24 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:01.582 13:07:24 -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:01.582 13:07:24 -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:01.582 13:07:24 -- scripts/common.sh@336 -- # IFS=.-:
00:03:01.582 13:07:24 -- scripts/common.sh@336 -- # read -ra ver1
00:03:01.582 13:07:24 -- scripts/common.sh@337 -- # IFS=.-:
00:03:01.582 13:07:24 -- scripts/common.sh@337 -- # read -ra ver2
00:03:01.582 13:07:24 -- scripts/common.sh@338 -- # local 'op=<'
00:03:01.582 13:07:24 -- scripts/common.sh@340 -- # ver1_l=2
00:03:01.582 13:07:24 -- scripts/common.sh@341 -- # ver2_l=1
00:03:01.582 13:07:24 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:01.582 13:07:24 -- scripts/common.sh@344 -- # case "$op" in
00:03:01.582 13:07:24 -- scripts/common.sh@345 -- # : 1
00:03:01.582 13:07:24 -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:01.582 13:07:24 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:01.582 13:07:24 -- scripts/common.sh@365 -- # decimal 1
00:03:01.582 13:07:24 -- scripts/common.sh@353 -- # local d=1
00:03:01.582 13:07:24 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:01.582 13:07:24 -- scripts/common.sh@355 -- # echo 1
00:03:01.582 13:07:24 -- scripts/common.sh@365 -- # ver1[v]=1
00:03:01.582 13:07:24 -- scripts/common.sh@366 -- # decimal 2
00:03:01.582 13:07:24 -- scripts/common.sh@353 -- # local d=2
00:03:01.582 13:07:24 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:01.582 13:07:24 -- scripts/common.sh@355 -- # echo 2
00:03:01.582 13:07:24 -- scripts/common.sh@366 -- # ver2[v]=2
00:03:01.582 13:07:24 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:01.582 13:07:24 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:01.582 13:07:24 -- scripts/common.sh@368 -- # return 0
00:03:01.582 13:07:24 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:01.582 13:07:24 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:01.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:01.582 --rc genhtml_branch_coverage=1
00:03:01.582 --rc genhtml_function_coverage=1
00:03:01.582 --rc genhtml_legend=1
00:03:01.582 --rc geninfo_all_blocks=1
00:03:01.582 --rc geninfo_unexecuted_blocks=1
00:03:01.582 
00:03:01.582 '
00:03:01.582 13:07:24 -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:01.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:01.582 --rc genhtml_branch_coverage=1
00:03:01.582 --rc genhtml_function_coverage=1
00:03:01.582 --rc genhtml_legend=1
00:03:01.582 --rc geninfo_all_blocks=1
00:03:01.582 --rc geninfo_unexecuted_blocks=1
00:03:01.582 
00:03:01.582 '
00:03:01.582 13:07:24 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:01.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:01.582 --rc genhtml_branch_coverage=1
00:03:01.582 --rc genhtml_function_coverage=1
00:03:01.582 --rc genhtml_legend=1
00:03:01.582 --rc geninfo_all_blocks=1
00:03:01.582 --rc geninfo_unexecuted_blocks=1
00:03:01.582 
00:03:01.582 '
00:03:01.582 13:07:24 -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:01.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:01.582 --rc genhtml_branch_coverage=1
00:03:01.582 --rc genhtml_function_coverage=1
00:03:01.582 --rc genhtml_legend=1
00:03:01.582 --rc geninfo_all_blocks=1
00:03:01.582 --rc geninfo_unexecuted_blocks=1
00:03:01.582 
00:03:01.582 '
00:03:01.582 13:07:24 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:03:01.582 13:07:24 -- nvmf/common.sh@7 -- # uname -s
00:03:01.582 13:07:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:01.582 13:07:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:01.582 13:07:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:01.582 13:07:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:01.582 13:07:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:03:01.582 13:07:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:03:01.582 13:07:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:01.582 13:07:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:03:01.582 13:07:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:01.582 13:07:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:03:01.844 13:07:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:03:01.844 13:07:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:03:01.844 13:07:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:01.844 13:07:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:03:01.844 13:07:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:03:01.844 13:07:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:01.844 13:07:24 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:01.844 13:07:24 -- scripts/common.sh@15 -- # shopt -s extglob
00:03:01.844 13:07:24 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:01.844 13:07:24 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:01.844 13:07:24 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:01.844 13:07:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:01.844 13:07:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:01.844 13:07:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:01.844 13:07:24 -- paths/export.sh@5 -- # export PATH
00:03:01.844 13:07:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:01.844 13:07:24 -- nvmf/common.sh@51 -- # : 0
00:03:01.844 13:07:24 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:03:01.844 13:07:24 -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:03:01.844 13:07:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:03:01.844 13:07:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:01.845 13:07:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:01.845 13:07:24 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:03:01.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:03:01.845 13:07:24 -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:03:01.845 13:07:24 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:03:01.845 13:07:24 -- nvmf/common.sh@55 -- # have_pci_nics=0
00:03:01.845 13:07:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:03:01.845 13:07:24 -- spdk/autotest.sh@32 -- # uname -s
00:03:01.845 13:07:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:03:01.845 13:07:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:03:01.845 13:07:24 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:03:01.845 13:07:24 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:03:01.845 13:07:24 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:03:01.845 13:07:24 -- spdk/autotest.sh@44 -- # modprobe nbd
00:03:01.845 13:07:24 -- spdk/autotest.sh@46 -- # type -P udevadm
00:03:01.845 13:07:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:03:01.845 13:07:24 -- spdk/autotest.sh@48 -- # udevadm_pid=651922
00:03:01.845 13:07:24 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:03:01.845 13:07:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:03:01.845 13:07:24 -- pm/common@17 -- # local monitor
00:03:01.845 13:07:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:01.845 13:07:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:01.845 13:07:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:01.845 13:07:24 -- pm/common@21 -- # date +%s
00:03:01.845 13:07:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:01.845 13:07:24 -- pm/common@21 -- # date +%s
00:03:01.845 13:07:24 -- pm/common@25 -- # sleep 1
00:03:01.845 13:07:24 -- pm/common@21 -- # date +%s
00:03:01.845 13:07:24 -- pm/common@21 -- # date +%s
00:03:01.845 13:07:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733400444
00:03:01.845 13:07:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733400444
00:03:01.845 13:07:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733400444
00:03:01.845 13:07:24 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733400444
00:03:01.845 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733400444_collect-vmstat.pm.log
00:03:01.845 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733400444_collect-cpu-load.pm.log
00:03:01.845 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733400444_collect-cpu-temp.pm.log
00:03:01.845 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733400444_collect-bmc-pm.bmc.pm.log
00:03:02.785 13:07:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:03:02.785 13:07:25 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:03:02.785 13:07:25 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:02.785 13:07:25 -- common/autotest_common.sh@10 -- # set +x
00:03:02.785 13:07:25 -- spdk/autotest.sh@59 -- # create_test_list
00:03:02.785 13:07:25 -- common/autotest_common.sh@752 -- # xtrace_disable
00:03:02.785 13:07:25 -- common/autotest_common.sh@10 -- # set +x
00:03:02.785 13:07:25 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:03:02.785 13:07:25 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:02.785 13:07:25 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:02.785 13:07:25 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:03:02.785 13:07:25 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:02.785 13:07:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:03:02.785 13:07:25 -- common/autotest_common.sh@1457 -- # uname
00:03:02.785 13:07:25 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:03:02.785 13:07:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:03:02.785 13:07:25 -- common/autotest_common.sh@1477 -- # uname
00:03:02.785 13:07:25 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:03:02.785 13:07:25 -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:03:02.785 13:07:25 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:03:02.785 lcov: LCOV version 1.15
00:03:03.045 13:07:25 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:03:17.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:03:17.973 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:36.115 13:07:55 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:36.115 13:07:55 -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:36.115 13:07:55 -- common/autotest_common.sh@10 -- # set +x
00:03:36.115 13:07:55 -- spdk/autotest.sh@78 -- # rm -f
00:03:36.115 13:07:55 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:37.061 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:03:37.061 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:03:37.061 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:03:37.061 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:03:37.061 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:03:37.061 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:03:37.061 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:03:37.061 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:03:37.061 0000:65:00.0 (144d a80a): Already using the nvme driver
00:03:37.061 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:03:37.061 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:03:37.061 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:03:37.061 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:03:37.061 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:03:37.061 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:03:37.322 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:03:37.322 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:03:37.583 13:07:59 -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:37.583 13:07:59 -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:03:37.583 13:07:59 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:03:37.583 13:07:59 -- common/autotest_common.sh@1658 -- # local nvme bdf
00:03:37.583 13:07:59 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:03:37.583 13:07:59 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:03:37.583 13:07:59 -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:03:37.583 13:07:59 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:37.583 13:07:59 -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:03:37.583 13:07:59 -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:37.583 13:07:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:37.583 13:07:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:37.583 13:07:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:03:37.583 13:07:59 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:03:37.583 13:07:59 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:37.583 No valid GPT data, bailing
00:03:37.583 13:08:00 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:37.583 13:08:00 -- scripts/common.sh@394 -- # pt=
00:03:37.583 13:08:00 -- scripts/common.sh@395 -- # return 1
00:03:37.583 13:08:00 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:37.583 1+0 records in
00:03:37.583 1+0 records out
00:03:37.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00190934 s, 549 MB/s
00:03:37.583 13:08:00 -- spdk/autotest.sh@105 -- # sync
00:03:37.584 13:08:00 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:37.584 13:08:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:37.584 13:08:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:47.588 13:08:08 -- spdk/autotest.sh@111 -- # uname -s
00:03:47.588 13:08:08 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:03:47.588 13:08:08 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:03:47.588 13:08:08 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:50.134 Hugepages
00:03:50.134 node hugesize free / total
00:03:50.134 node0 1048576kB 0 / 0
00:03:50.134 node0 2048kB 0 / 0
00:03:50.134 node1 1048576kB 0 / 0
00:03:50.134 node1 2048kB 0 / 0
00:03:50.134 
00:03:50.134 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:50.134 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:03:50.134 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:03:50.134 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:03:50.134 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:03:50.134 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:03:50.134 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:03:50.134 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:03:50.134 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:03:50.134 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:03:50.134 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:03:50.134 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:03:50.134 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:03:50.134 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:03:50.134 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:03:50.134 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:03:50.134 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:03:50.134 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:03:50.134 13:08:12 -- spdk/autotest.sh@117 -- # uname -s
00:03:50.134 13:08:12 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:03:50.134 13:08:12 -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:03:50.134 13:08:12 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:53.435 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:53.435 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:53.435 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:53.435 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:53.435 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:53.435 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:53.694 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:53.694 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:53.694 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:53.694 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:53.694 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:53.694 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:53.694 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:53.694 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:53.694 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:53.694 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:55.602 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:03:55.862 13:08:18 -- common/autotest_common.sh@1517 -- # sleep 1
00:03:56.805 13:08:19 -- common/autotest_common.sh@1518 -- # bdfs=()
00:03:56.805 13:08:19 -- common/autotest_common.sh@1518 -- # local bdfs
00:03:56.805 13:08:19 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:03:56.805 13:08:19 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:03:56.805 13:08:19 -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:56.805 13:08:19 -- common/autotest_common.sh@1498 -- # local bdfs
00:03:56.805 13:08:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:56.805 13:08:19 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:56.805 13:08:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:56.805 13:08:19 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:03:56.805 13:08:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0
00:03:56.805 13:08:19 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:01.008 Waiting for block devices as requested
00:04:01.008 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:04:01.008 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:04:01.008 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:04:01.008 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:04:01.008 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:04:01.268 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:04:01.268 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:04:01.268 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:04:01.529 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:04:01.790 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:04:01.790 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:04:01.790 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:04:01.790 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:04:01.790 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:04:02.051 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:04:02.051 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:04:02.051 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:04:02.311 13:08:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:04:02.311 13:08:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0
00:04:02.311 13:08:24 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:04:02.311 13:08:24 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme
00:04:02.311 13:08:24 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0
00:04:02.311 13:08:24 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]]
00:04:02.311 13:08:24 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0
00:04:02.311 13:08:24 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:04:02.311 13:08:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:04:02.311 13:08:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:04:02.311 13:08:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:04:02.311 13:08:24 -- common/autotest_common.sh@1531 -- # grep oacs
00:04:02.311 13:08:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:04:02.311 13:08:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f'
00:04:02.311 13:08:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:04:02.311 13:08:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:04:02.311 13:08:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:04:02.311 13:08:24 -- common/autotest_common.sh@1540 -- # grep unvmcap
00:04:02.311 13:08:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:04:02.311 13:08:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:04:02.311 13:08:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]]
00:04:02.311 13:08:24 -- common/autotest_common.sh@1543 -- # continue
00:04:02.311 13:08:24 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:04:02.311 13:08:24 -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:02.311 13:08:24 -- common/autotest_common.sh@10 -- # set +x
00:04:02.573 13:08:24 -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:04:02.573 13:08:24 -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:02.573 13:08:24 -- common/autotest_common.sh@10 -- # set +x
00:04:02.573 13:08:24 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:06.775 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:04:06.775 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:04:06.775 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:04:06.775 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:04:06.775 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:04:06.775 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:04:06.775 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:04:06.775 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:04:06.775 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:04:06.775 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:04:06.775 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:04:06.775 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:04:06.775 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:04:06.775 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:04:06.775 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:04:06.775 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:04:06.775 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:04:06.775 13:08:29 -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:04:06.775 13:08:29 -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:06.775 13:08:29 -- common/autotest_common.sh@10 -- # set +x
00:04:06.775 13:08:29 -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:04:06.775 13:08:29 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:04:06.775 13:08:29 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:04:06.775 13:08:29 -- common/autotest_common.sh@1563 -- # bdfs=()
00:04:06.775 13:08:29 -- common/autotest_common.sh@1563 -- # _bdfs=()
00:04:06.775 13:08:29 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:04:06.775 13:08:29 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:04:06.775 13:08:29 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:04:06.775 13:08:29 -- common/autotest_common.sh@1498 -- # bdfs=()
00:04:06.775 13:08:29 -- common/autotest_common.sh@1498 -- # local bdfs
00:04:06.775 13:08:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:04:06.775 13:08:29 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:04:06.775 13:08:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:04:07.036 13:08:29 -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:04:07.036 13:08:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0
00:04:07.036 13:08:29 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:04:07.036 13:08:29 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device
00:04:07.036 13:08:29 -- common/autotest_common.sh@1566 -- # device=0xa80a
00:04:07.036 13:08:29 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]]
00:04:07.036 13:08:29 -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:04:07.036 13:08:29 -- common/autotest_common.sh@1572 -- # return 0
00:04:07.036 13:08:29 -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:04:07.036 13:08:29 -- common/autotest_common.sh@1580 -- # return 0
00:04:07.036 13:08:29 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:04:07.036 13:08:29 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:04:07.036 13:08:29 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:07.036 13:08:29 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:07.036 13:08:29 -- spdk/autotest.sh@149 -- # timing_enter lib
00:04:07.036 13:08:29 -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:07.036 13:08:29 -- common/autotest_common.sh@10 -- # set +x
00:04:07.036 13:08:29 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:04:07.036 13:08:29 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:04:07.036 13:08:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:07.036 13:08:29 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:07.036 13:08:29 -- common/autotest_common.sh@10 -- # set +x
00:04:07.036 ************************************
00:04:07.036 START TEST env
00:04:07.036 ************************************
00:04:07.036 13:08:29 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:04:07.036 * Looking for test storage...
00:04:07.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:04:07.036 13:08:29 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:07.036 13:08:29 env -- common/autotest_common.sh@1693 -- # lcov --version
00:04:07.036 13:08:29 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:07.298 13:08:29 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:07.298 13:08:29 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:07.298 13:08:29 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:07.298 13:08:29 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:07.298 13:08:29 env -- scripts/common.sh@336 -- # IFS=.-:
00:04:07.298 13:08:29 env -- scripts/common.sh@336 -- # read -ra ver1
00:04:07.298 13:08:29 env -- scripts/common.sh@337 -- # IFS=.-:
00:04:07.298 13:08:29 env -- scripts/common.sh@337 -- # read -ra ver2
00:04:07.298 13:08:29 env -- scripts/common.sh@338 -- # local 'op=<'
00:04:07.298 13:08:29 env -- scripts/common.sh@340 -- # ver1_l=2
00:04:07.298 13:08:29 env -- scripts/common.sh@341 -- # ver2_l=1
00:04:07.298 13:08:29 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:07.298 13:08:29 env -- scripts/common.sh@344 -- # case "$op" in
00:04:07.298 13:08:29 env -- scripts/common.sh@345 -- # : 1
00:04:07.298 13:08:29 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:07.298 13:08:29 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:07.298 13:08:29 env -- scripts/common.sh@365 -- # decimal 1
00:04:07.298 13:08:29 env -- scripts/common.sh@353 -- # local d=1
00:04:07.298 13:08:29 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:07.298 13:08:29 env -- scripts/common.sh@355 -- # echo 1
00:04:07.298 13:08:29 env -- scripts/common.sh@365 -- # ver1[v]=1
00:04:07.298 13:08:29 env -- scripts/common.sh@366 -- # decimal 2
00:04:07.298 13:08:29 env -- scripts/common.sh@353 -- # local d=2
00:04:07.298 13:08:29 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:07.298 13:08:29 env -- scripts/common.sh@355 -- # echo 2
00:04:07.298 13:08:29 env -- scripts/common.sh@366 -- # ver2[v]=2
00:04:07.298 13:08:29 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:07.298 13:08:29 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:07.298 13:08:29 env -- scripts/common.sh@368 -- # return 0
00:04:07.298 13:08:29 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:07.298 13:08:29 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:07.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.298 --rc genhtml_branch_coverage=1
00:04:07.298 --rc genhtml_function_coverage=1
00:04:07.298 --rc genhtml_legend=1
00:04:07.298 --rc geninfo_all_blocks=1
00:04:07.298 --rc geninfo_unexecuted_blocks=1
00:04:07.298 
00:04:07.298 '
00:04:07.298 13:08:29 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:07.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.298 --rc genhtml_branch_coverage=1
00:04:07.298 --rc genhtml_function_coverage=1
00:04:07.298 --rc genhtml_legend=1
00:04:07.298 --rc geninfo_all_blocks=1
00:04:07.298 --rc geninfo_unexecuted_blocks=1
00:04:07.298 
00:04:07.298 '
00:04:07.298 13:08:29 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:07.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.298 --rc genhtml_branch_coverage=1
00:04:07.298 --rc genhtml_function_coverage=1
00:04:07.298 --rc genhtml_legend=1
00:04:07.298 --rc geninfo_all_blocks=1
00:04:07.298 --rc geninfo_unexecuted_blocks=1
00:04:07.298 
00:04:07.298 '
00:04:07.298 13:08:29 env -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:07.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.298 --rc genhtml_branch_coverage=1
00:04:07.298 --rc genhtml_function_coverage=1
00:04:07.298 --rc genhtml_legend=1
00:04:07.298 --rc geninfo_all_blocks=1
00:04:07.298 --rc geninfo_unexecuted_blocks=1
00:04:07.298 
00:04:07.298 '
00:04:07.298 13:08:29 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:04:07.298 13:08:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:07.298 13:08:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:07.298 13:08:29 env -- common/autotest_common.sh@10 -- # set +x
00:04:07.298 ************************************
00:04:07.298 START TEST env_memory
00:04:07.298 ************************************
00:04:07.298 13:08:29 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:04:07.298 
00:04:07.298 
00:04:07.298 CUnit - A unit testing framework for C - Version 2.1-3
00:04:07.298 http://cunit.sourceforge.net/
00:04:07.298 
00:04:07.298 
00:04:07.298 Suite: memory
00:04:07.298 Test: alloc and free memory map ...[2024-12-05 13:08:29.755998] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:07.298 passed
00:04:07.298 Test: mem map translation ...[2024-12-05 13:08:29.781318] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:07.298 [2024-12-05 13:08:29.781336] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:07.298 [2024-12-05 13:08:29.781381] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:07.298 [2024-12-05 13:08:29.781388] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:07.298 passed
00:04:07.298 Test: mem map registration ...[2024-12-05 13:08:29.836438] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:04:07.298 [2024-12-05 13:08:29.836451] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:04:07.298 passed
00:04:07.560 Test: mem map adjacent registrations ...passed
00:04:07.560 
00:04:07.560 Run Summary: Type Total Ran Passed Failed Inactive
00:04:07.560 suites 1 1 n/a 0 0
00:04:07.560 tests 4 4 4 0 0
00:04:07.560 asserts 152 152 152 0 n/a
00:04:07.560 
00:04:07.560 Elapsed time = 0.190 seconds
00:04:07.560 
00:04:07.560 real 0m0.204s
00:04:07.560 user 0m0.194s
00:04:07.560 sys 0m0.010s
00:04:07.560 13:08:29 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:07.560 13:08:29 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:04:07.560 ************************************
00:04:07.560 END TEST env_memory
00:04:07.560 ************************************
00:04:07.560 13:08:29 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:07.560 13:08:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:07.560 13:08:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:07.560 13:08:29 env -- common/autotest_common.sh@10 -- # set +x
00:04:07.560 ************************************
00:04:07.560 START TEST env_vtophys
00:04:07.560 ************************************
00:04:07.561 13:08:29 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:04:07.561 EAL: lib.eal log level changed from notice to debug
00:04:07.561 EAL: Detected lcore 0 as core 0 on socket 0
00:04:07.561 EAL: Detected lcore 1 as core 1 on socket 0
00:04:07.561 EAL: Detected lcore 2 as core 2 on socket 0
00:04:07.561 EAL: Detected lcore 3 as core 3 on socket 0
00:04:07.561 EAL: Detected lcore 4 as core 4 on socket 0
00:04:07.561 EAL: Detected lcore 5 as core 5 on socket 0
00:04:07.561 EAL: Detected lcore 6 as core 6 on socket 0
00:04:07.561 EAL: Detected lcore 7 as core 7 on socket 0
00:04:07.561 EAL: Detected lcore 8 as core 8 on socket 0
00:04:07.561 EAL: Detected lcore 9 as core 9 on socket 0
00:04:07.561 EAL: Detected lcore 10 as core 10 on socket 0
00:04:07.561 EAL: Detected lcore 11 as core 11 on socket 0
00:04:07.561 EAL: Detected lcore 12 as core 12 on socket 0
00:04:07.561 EAL: Detected lcore 13 as core 13 on socket 0
00:04:07.561 EAL: Detected lcore 14 as core 14 on socket 0
00:04:07.561 EAL: Detected lcore 15 as core 15 on socket 0
00:04:07.561 EAL: Detected lcore 16 as core 16 on socket 0
00:04:07.561 EAL: Detected lcore 17 as core 17 on socket 0
00:04:07.561 EAL: Detected lcore 18 as core 18 on socket 0
00:04:07.561 EAL: Detected lcore 19 as core 19 on socket 0
00:04:07.561 EAL: Detected lcore 20 as core 20 on socket 0
00:04:07.561 EAL: Detected lcore 21 as core 21 on socket 0
00:04:07.561 EAL: Detected lcore 22 as core 22 on socket 0
00:04:07.561 EAL: Detected lcore 23 as core 23 on socket 0
00:04:07.561 EAL: Detected lcore 24 as core 24 on socket 0
00:04:07.561 EAL: Detected lcore 25 as core 25 on socket 0
00:04:07.561 EAL: Detected lcore 26 as core 26 on socket 0
00:04:07.561 EAL: Detected lcore 27 as core 27 on socket 0
00:04:07.561 EAL: Detected lcore 28 as core 28 on socket 0
00:04:07.561 EAL: Detected lcore 29 as core 29 on socket 0
00:04:07.561 EAL: Detected lcore 30 as core 30 on socket 0
00:04:07.561 EAL: Detected lcore 31 as core 31 on socket 0
00:04:07.561 EAL: Detected lcore 32 as core 32 on socket 0
00:04:07.561 EAL: Detected lcore 33 as core 33 on socket 0
00:04:07.561 EAL: Detected lcore 34 as core 34 on socket 0
00:04:07.561 EAL: Detected lcore 35 as core 35 on socket 0
00:04:07.561 EAL: Detected lcore 36 as core 0 on socket 1
00:04:07.561 EAL: Detected lcore 37 as core 1 on socket 1
00:04:07.561 EAL: Detected lcore 38 as core 2 on socket 1
00:04:07.561 EAL: Detected lcore 39 as core 3 on socket 1
00:04:07.561 EAL: Detected lcore 40 as core 4 on socket 1
00:04:07.561 EAL: Detected lcore 41 as core 5 on socket 1
00:04:07.561 EAL: Detected lcore 42 as core 6 on socket 1
00:04:07.561 EAL: Detected lcore 43 as core 7 on socket 1
00:04:07.561 EAL: Detected lcore 44 as core 8 on socket 1
00:04:07.561 EAL: Detected lcore 45 as core 9 on socket 1
00:04:07.561 EAL: Detected lcore 46 as core 10 on socket 1
00:04:07.561 EAL: Detected lcore 47 as core 11 on socket 1
00:04:07.561 EAL: Detected lcore 48 as core 12 on socket 1
00:04:07.561 EAL: Detected lcore 49 as core 13 on socket 1
00:04:07.561 EAL: Detected lcore 50 as core 14 on socket 1
00:04:07.561 EAL: Detected lcore 51 as core 15 on socket 1
00:04:07.561 EAL: Detected lcore 52 as core 16 on socket 1
00:04:07.561 EAL: Detected lcore 53 as core 17 on socket 1
00:04:07.561 EAL: Detected lcore 54 as core 18 on socket 1
00:04:07.561 EAL: Detected lcore 55 as core 19 on socket 1
00:04:07.561 EAL: Detected lcore 56 as core 20 on socket 1
00:04:07.561 EAL: Detected lcore 57 as core 21 on socket 1
00:04:07.561 EAL: Detected lcore 58 as core 22 on socket 1
00:04:07.561 EAL: Detected lcore 59 as core 23 on socket 1
00:04:07.561 EAL: Detected lcore 60 as core 24 on socket 1
00:04:07.561 EAL: Detected lcore 61 as core 25 on socket 1
00:04:07.561 EAL: Detected lcore 62 as core 26 on socket 1
00:04:07.561 EAL: Detected lcore 63 as core 27 on socket 1
00:04:07.561 EAL: Detected lcore 64 as core 28 on socket 1
00:04:07.561 EAL: Detected lcore 65 as core 29 on socket 1
00:04:07.561 EAL: Detected lcore 66 as core 30 on socket 1
00:04:07.561 EAL: Detected lcore 67 as core 31 on socket 1
00:04:07.561 EAL: Detected lcore 68 as core 32 on socket 1
00:04:07.561 EAL: Detected lcore 69 as core 33 on socket 1
00:04:07.561 EAL: Detected lcore 70 as core 34 on socket 1
00:04:07.561 EAL: Detected lcore 71 as core 35 on socket 1
00:04:07.561 EAL: Detected lcore 72 as core 0 on socket 0
00:04:07.561 EAL: Detected lcore 73 as core 1 on socket 0
00:04:07.561 EAL: Detected lcore 74 as core 2 on socket 0
00:04:07.561 EAL: Detected lcore 75 as core 3 on socket 0
00:04:07.561 EAL: Detected lcore 76 as core 4 on socket 0
00:04:07.561 EAL: Detected lcore 77 as core 5 on socket 0
00:04:07.561 EAL: Detected lcore 78 as core 6 on socket 0
00:04:07.561 EAL: Detected lcore 79 as core 7 on socket 0
00:04:07.561 EAL: Detected lcore 80 as core 8 on socket 0
00:04:07.561 EAL: Detected lcore 81 as core 9 on socket 0
00:04:07.561 EAL: Detected lcore 82 as core 10 on socket 0
00:04:07.561 EAL: Detected lcore 83 as core 11 on socket 0
00:04:07.561 EAL: Detected lcore 84 as core 12 on socket 0
00:04:07.561 EAL: Detected lcore 85 as core 13 on socket 0
00:04:07.561 EAL: Detected lcore 86 as core 14 on socket 0
00:04:07.561 EAL: Detected lcore 87 as core 15 on socket 0
00:04:07.561 EAL: Detected lcore 88 as core 16 on socket 0
00:04:07.561 EAL: Detected lcore 89 as core 17 on socket 0
00:04:07.561 EAL: Detected lcore 90 as core 18 on socket 0
00:04:07.561 EAL: Detected lcore 91 as core 19 on socket 0
00:04:07.561 EAL: Detected lcore 92 as core 20 on socket 0
00:04:07.561 EAL: Detected lcore 93 as core 21 on socket 0
00:04:07.561 EAL: Detected lcore 94 as core 22 on socket 0
00:04:07.561 EAL: Detected lcore 95 as core 23 on socket 0
00:04:07.561 EAL: Detected lcore 96 as core 24 on socket 0
00:04:07.561 EAL: Detected lcore 97 as core 25 on socket 0
00:04:07.561 EAL: Detected lcore 98 as core 26 on socket 0
00:04:07.561 EAL: Detected lcore 99 as core 27 on socket 0
00:04:07.561 EAL: Detected lcore 100 as core 28 on socket 0
00:04:07.561 EAL: Detected lcore 101 as core 29 on socket 0
00:04:07.561 EAL: Detected lcore 102 as core 30 on socket 0
00:04:07.561 EAL: Detected lcore 103 as core 31 on socket 0
00:04:07.561 EAL: Detected lcore 104 as core 32 on socket 0
00:04:07.561 EAL: Detected lcore 105 as core 33 on socket 0
00:04:07.561 EAL: Detected lcore 106 as core 34 on socket 0
00:04:07.561 EAL: Detected lcore 107 as core 35 on socket 0
00:04:07.561 EAL: Detected lcore 108 as core 0 on socket 1
00:04:07.561 EAL: Detected lcore 109 as core 1 on socket 1
00:04:07.561 EAL: Detected lcore 110 as core 2 on socket 1
00:04:07.561 EAL: Detected lcore 111 as core 3 on socket 1
00:04:07.561 EAL: Detected lcore 112 as core 4 on socket 1
00:04:07.561 EAL: Detected lcore 113 as core 5 on socket 1
00:04:07.561 EAL: Detected lcore 114 as core 6 on socket 1
00:04:07.561 EAL: Detected lcore 115 as core 7 on socket 1
00:04:07.561 EAL: Detected lcore 116 as core 8 on socket 1
00:04:07.561 EAL: Detected lcore 117 as core 9 on socket 1
00:04:07.561 EAL: Detected lcore 118 as core 10 on socket 1
00:04:07.561 EAL: Detected lcore 119 as core 11 on socket 1
00:04:07.561 EAL: Detected lcore 120 as core 12 on socket 1
00:04:07.561 EAL: Detected lcore 121 as core 13 on socket 1
00:04:07.561 EAL: Detected lcore 122 as core 14 on socket 1
00:04:07.561 EAL: Detected lcore 123 as core 15 on socket 1
00:04:07.561 EAL: Detected lcore 124 as core 16 on socket 1
00:04:07.561 EAL: Detected lcore 125 as core 17 on socket 1
00:04:07.561 EAL: Detected lcore 126 as core 18 on socket 1
00:04:07.561 EAL: Detected lcore 127 as core 19 on socket 1
00:04:07.561 EAL: Skipped lcore 128 as core 20 on socket 1
00:04:07.561 EAL: Skipped lcore 129 as core 21 on socket 1
00:04:07.561 EAL: Skipped lcore 130 as core 22 on socket 1
00:04:07.561 EAL: Skipped lcore 131 as core 23 on socket 1
00:04:07.561 EAL: Skipped lcore 132 as core 24 on socket 1
00:04:07.561 EAL: Skipped lcore 133 as core 25 on socket 1
00:04:07.561 EAL: Skipped lcore 134 as core 26 on socket 1
00:04:07.561 EAL: Skipped lcore 135 as core 27 on socket 1
00:04:07.561 EAL: Skipped lcore 136 as core 28 on socket 1
00:04:07.561 EAL: Skipped lcore 137 as core 29 on socket 1
00:04:07.561 EAL: Skipped lcore 138 as core 30 on socket 1
00:04:07.561 EAL: Skipped lcore 139 as core 31 on socket 1
00:04:07.561 EAL: Skipped lcore 140 as core 32 on socket 1
00:04:07.561 EAL: Skipped lcore 141 as core 33 on socket 1
00:04:07.561 EAL: Skipped lcore 142 as core 34 on socket 1
00:04:07.561 EAL: Skipped lcore 143 as core 35 on socket 1
00:04:07.561 EAL: Maximum logical cores by configuration: 128
00:04:07.561 EAL: Detected CPU lcores: 128
00:04:07.561 EAL: Detected NUMA nodes: 2
00:04:07.561 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:04:07.562 EAL: Detected shared linkage of DPDK
00:04:07.562 EAL: No shared files mode enabled, IPC will be disabled
00:04:07.562 EAL: Bus pci wants IOVA as 'DC'
00:04:07.562 EAL: Buses did not request a specific IOVA mode.
00:04:07.562 EAL: IOMMU is available, selecting IOVA as VA mode.
00:04:07.562 EAL: Selected IOVA mode 'VA'
00:04:07.562 EAL: Probing VFIO support...
00:04:07.562 EAL: IOMMU type 1 (Type 1) is supported
00:04:07.562 EAL: IOMMU type 7 (sPAPR) is not supported
00:04:07.562 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:04:07.562 EAL: VFIO support initialized
00:04:07.562 EAL: Ask a virtual area of 0x2e000 bytes
00:04:07.562 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:07.562 EAL: Setting up physically contiguous memory...
00:04:07.562 EAL: Setting maximum number of open files to 524288
00:04:07.562 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:07.562 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:04:07.562 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:07.562 EAL: Ask a virtual area of 0x61000 bytes
00:04:07.562 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:07.562 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:07.562 EAL: Ask a virtual area of 0x400000000 bytes
00:04:07.562 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:07.562 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:07.562 EAL: Ask a virtual area of 0x61000 bytes
00:04:07.562 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:07.562 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:07.562 EAL: Ask a virtual area of 0x400000000 bytes
00:04:07.562 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:07.562 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:07.562 EAL: Ask a virtual area of 0x61000 bytes
00:04:07.562 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:07.562 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:07.562 EAL: Ask a virtual area of 0x400000000 bytes
00:04:07.562 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:07.562 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:07.562 EAL: Ask a virtual area of 0x61000 bytes
00:04:07.562 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:07.562 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:07.562 EAL: Ask a virtual area of 0x400000000 bytes
00:04:07.562 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:07.562 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:07.562 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:04:07.562 EAL: Ask a virtual area of 0x61000 bytes
00:04:07.562 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:04:07.562 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:07.562 EAL: Ask a virtual area of 0x400000000 bytes
00:04:07.562 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:04:07.562 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:04:07.562 EAL: Ask a virtual area of 0x61000 bytes
00:04:07.562 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:04:07.562 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:07.562 EAL: Ask a virtual area of 0x400000000 bytes
00:04:07.562 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:04:07.562 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:04:07.562 EAL: Ask a virtual area of 0x61000 bytes
00:04:07.562 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:04:07.562 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:07.562 EAL: Ask a virtual area of 0x400000000 bytes
00:04:07.562 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:04:07.562 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:04:07.562 EAL: Ask a virtual area of 0x61000 bytes
00:04:07.562 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:04:07.562 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:04:07.562 EAL: Ask a virtual area of 0x400000000 bytes
00:04:07.562 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:04:07.562 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:04:07.562 EAL: Hugepages will be freed exactly as allocated.
00:04:07.562 EAL: No shared files mode enabled, IPC is disabled
00:04:07.562 EAL: No shared files mode enabled, IPC is disabled
00:04:07.562 EAL: TSC frequency is ~2400000 KHz
00:04:07.562 EAL: Main lcore 0 is ready (tid=7f6f8a5d5a00;cpuset=[0])
00:04:07.562 EAL: Trying to obtain current memory policy.
00:04:07.562 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:07.562 EAL: Restoring previous memory policy: 0
00:04:07.562 EAL: request: mp_malloc_sync
00:04:07.562 EAL: No shared files mode enabled, IPC is disabled
00:04:07.562 EAL: Heap on socket 0 was expanded by 2MB
00:04:07.562 EAL: No shared files mode enabled, IPC is disabled
00:04:07.562 EAL: No PCI address specified using 'addr=' in: bus=pci
00:04:07.562 EAL: Mem event callback 'spdk:(nil)' registered
00:04:07.562 
00:04:07.562 
00:04:07.562 CUnit - A unit testing framework for C - Version 2.1-3
00:04:07.562 http://cunit.sourceforge.net/
00:04:07.562 
00:04:07.562 
00:04:07.562 Suite: components_suite
00:04:07.562 Test: vtophys_malloc_test ...passed
00:04:07.562 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:04:07.562 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:07.562 EAL: Restoring previous memory policy: 4
00:04:07.562 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.562 EAL: request: mp_malloc_sync
00:04:07.562 EAL: No shared files mode enabled, IPC is disabled
00:04:07.562 EAL: Heap on socket 0 was expanded by 4MB
00:04:07.562 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.562 EAL: request: mp_malloc_sync
00:04:07.562 EAL: No shared files mode enabled, IPC is disabled
00:04:07.562 EAL: Heap on socket 0 was shrunk by 4MB
00:04:07.562 EAL: Trying to obtain current memory policy.
00:04:07.562 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:07.562 EAL: Restoring previous memory policy: 4
00:04:07.562 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.562 EAL: request: mp_malloc_sync
00:04:07.562 EAL: No shared files mode enabled, IPC is disabled
00:04:07.562 EAL: Heap on socket 0 was expanded by 6MB
00:04:07.562 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.562 EAL: request: mp_malloc_sync
00:04:07.562 EAL: No shared files mode enabled, IPC is disabled
00:04:07.562 EAL: Heap on socket 0 was shrunk by 6MB
00:04:07.562 EAL: Trying to obtain current memory policy.
00:04:07.562 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:07.562 EAL: Restoring previous memory policy: 4
00:04:07.562 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.562 EAL: request: mp_malloc_sync
00:04:07.562 EAL: No shared files mode enabled, IPC is disabled
00:04:07.562 EAL: Heap on socket 0 was expanded by 10MB
00:04:07.562 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.562 EAL: request: mp_malloc_sync
00:04:07.562 EAL: No shared files mode enabled, IPC is disabled
00:04:07.562 EAL: Heap on socket 0 was shrunk by 10MB
00:04:07.562 EAL: Trying to obtain current memory policy.
00:04:07.562 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:07.562 EAL: Restoring previous memory policy: 4
00:04:07.562 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.562 EAL: request: mp_malloc_sync
00:04:07.562 EAL: No shared files mode enabled, IPC is disabled
00:04:07.562 EAL: Heap on socket 0 was expanded by 18MB
00:04:07.562 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.562 EAL: request: mp_malloc_sync
00:04:07.562 EAL: No shared files mode enabled, IPC is disabled
00:04:07.562 EAL: Heap on socket 0 was shrunk by 18MB
00:04:07.562 EAL: Trying to obtain current memory policy.
00:04:07.562 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:07.562 EAL: Restoring previous memory policy: 4
00:04:07.562 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.562 EAL: request: mp_malloc_sync
00:04:07.562 EAL: No shared files mode enabled, IPC is disabled
00:04:07.562 EAL: Heap on socket 0 was expanded by 34MB
00:04:07.562 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.562 EAL: request: mp_malloc_sync
00:04:07.562 EAL: No shared files mode enabled, IPC is disabled
00:04:07.562 EAL: Heap on socket 0 was shrunk by 34MB
00:04:07.562 EAL: Trying to obtain current memory policy.
00:04:07.562 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:07.562 EAL: Restoring previous memory policy: 4
00:04:07.562 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.562 EAL: request: mp_malloc_sync
00:04:07.562 EAL: No shared files mode enabled, IPC is disabled
00:04:07.562 EAL: Heap on socket 0 was expanded by 66MB
00:04:07.562 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.824 EAL: request: mp_malloc_sync
00:04:07.824 EAL: No shared files mode enabled, IPC is disabled
00:04:07.824 EAL: Heap on socket 0 was shrunk by 66MB
00:04:07.824 EAL: Trying to obtain current memory policy.
00:04:07.824 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:07.824 EAL: Restoring previous memory policy: 4
00:04:07.824 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.824 EAL: request: mp_malloc_sync
00:04:07.824 EAL: No shared files mode enabled, IPC is disabled
00:04:07.824 EAL: Heap on socket 0 was expanded by 130MB
00:04:07.824 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.824 EAL: request: mp_malloc_sync
00:04:07.824 EAL: No shared files mode enabled, IPC is disabled
00:04:07.824 EAL: Heap on socket 0 was shrunk by 130MB
00:04:07.824 EAL: Trying to obtain current memory policy.
00:04:07.824 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:07.824 EAL: Restoring previous memory policy: 4
00:04:07.824 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.824 EAL: request: mp_malloc_sync
00:04:07.824 EAL: No shared files mode enabled, IPC is disabled
00:04:07.824 EAL: Heap on socket 0 was expanded by 258MB
00:04:07.824 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.824 EAL: request: mp_malloc_sync
00:04:07.824 EAL: No shared files mode enabled, IPC is disabled
00:04:07.824 EAL: Heap on socket 0 was shrunk by 258MB
00:04:07.824 EAL: Trying to obtain current memory policy.
00:04:07.824 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:07.824 EAL: Restoring previous memory policy: 4
00:04:07.824 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.824 EAL: request: mp_malloc_sync
00:04:07.824 EAL: No shared files mode enabled, IPC is disabled
00:04:07.824 EAL: Heap on socket 0 was expanded by 514MB
00:04:07.824 EAL: Calling mem event callback 'spdk:(nil)'
00:04:08.085 EAL: request: mp_malloc_sync
00:04:08.085 EAL: No shared files mode enabled, IPC is disabled
00:04:08.085 EAL: Heap on socket 0 was shrunk by 514MB
00:04:08.085 EAL: Trying to obtain current memory policy.
00:04:08.085 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:08.085 EAL: Restoring previous memory policy: 4
00:04:08.085 EAL: Calling mem event callback 'spdk:(nil)'
00:04:08.085 EAL: request: mp_malloc_sync
00:04:08.085 EAL: No shared files mode enabled, IPC is disabled
00:04:08.085 EAL: Heap on socket 0 was expanded by 1026MB
00:04:08.346 EAL: Calling mem event callback 'spdk:(nil)'
00:04:08.346 EAL: request: mp_malloc_sync
00:04:08.346 EAL: No shared files mode enabled, IPC is disabled
00:04:08.346 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:08.346 passed
00:04:08.346 
00:04:08.346 Run Summary: Type Total Ran Passed Failed Inactive
00:04:08.346 suites 1 1 n/a 0 0
00:04:08.346 tests 2 2 2 0 0
00:04:08.346 asserts 497 497 497 0 n/a
00:04:08.346 
00:04:08.346 Elapsed time = 0.649 seconds
00:04:08.346 EAL: Calling mem event callback 'spdk:(nil)'
00:04:08.346 EAL: request: mp_malloc_sync
00:04:08.346 EAL: No shared files mode enabled, IPC is disabled
00:04:08.346 EAL: Heap on socket 0 was shrunk by 2MB
00:04:08.346 EAL: No shared files mode enabled, IPC is disabled
00:04:08.346 EAL: No shared files mode enabled, IPC is disabled
00:04:08.346 EAL: No shared files mode enabled, IPC is disabled
00:04:08.346 
00:04:08.346 real 0m0.797s
00:04:08.346 user 0m0.423s
00:04:08.346 sys 0m0.341s
00:04:08.346 13:08:30 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:08.346 13:08:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:08.346 ************************************
00:04:08.346 END TEST env_vtophys
00:04:08.346 ************************************
00:04:08.346 13:08:30 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:08.346 13:08:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:08.346 13:08:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:08.346 13:08:30 env -- common/autotest_common.sh@10 -- # set +x
00:04:08.346 ************************************
00:04:08.346 START TEST env_pci
00:04:08.346 ************************************
00:04:08.347 13:08:30 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:08.347 
00:04:08.347 
00:04:08.347 CUnit - A unit testing framework for C - Version 2.1-3
00:04:08.347 http://cunit.sourceforge.net/
00:04:08.347 
00:04:08.347 
00:04:08.347 Suite: pci
00:04:08.347 Test: pci_hook ...[2024-12-05 13:08:30.881770] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 672283 has claimed it
00:04:08.607 EAL: Cannot find device (10000:00:01.0)
00:04:08.607 EAL: Failed to attach device on primary process
00:04:08.607 passed
00:04:08.607 
00:04:08.607 Run Summary: Type Total Ran Passed Failed Inactive
00:04:08.607 suites 1 1 n/a 0 0 00:04:08.607 tests 1 1 1 0 0 00:04:08.607 asserts 25 25 25 0 n/a 00:04:08.607 00:04:08.607 Elapsed time = 0.034 seconds 00:04:08.607 00:04:08.607 real 0m0.056s 00:04:08.607 user 0m0.012s 00:04:08.607 sys 0m0.044s 00:04:08.607 13:08:30 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.607 13:08:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:08.607 ************************************ 00:04:08.607 END TEST env_pci 00:04:08.607 ************************************ 00:04:08.607 13:08:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:08.607 13:08:30 env -- env/env.sh@15 -- # uname 00:04:08.607 13:08:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:08.607 13:08:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:08.607 13:08:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:08.607 13:08:30 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:08.607 13:08:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.607 13:08:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.607 ************************************ 00:04:08.607 START TEST env_dpdk_post_init 00:04:08.607 ************************************ 00:04:08.607 13:08:31 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:08.607 EAL: Detected CPU lcores: 128 00:04:08.607 EAL: Detected NUMA nodes: 2 00:04:08.607 EAL: Detected shared linkage of DPDK 00:04:08.607 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:08.607 EAL: Selected IOVA mode 'VA' 00:04:08.607 EAL: VFIO support initialized 00:04:08.607 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:08.607 EAL: Using IOMMU type 1 (Type 1) 00:04:08.867 EAL: Ignore mapping IO port bar(1) 00:04:08.867 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:09.156 EAL: Ignore mapping IO port bar(1) 00:04:09.156 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:09.541 EAL: Ignore mapping IO port bar(1) 00:04:09.541 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:09.541 EAL: Ignore mapping IO port bar(1) 00:04:09.541 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:09.828 EAL: Ignore mapping IO port bar(1) 00:04:09.828 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:09.828 EAL: Ignore mapping IO port bar(1) 00:04:10.088 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:10.088 EAL: Ignore mapping IO port bar(1) 00:04:10.348 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:10.348 EAL: Ignore mapping IO port bar(1) 00:04:10.609 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:10.609 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:10.869 EAL: Ignore mapping IO port bar(1) 00:04:10.869 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:11.131 EAL: Ignore mapping IO port bar(1) 00:04:11.131 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:11.393 EAL: Ignore mapping IO port bar(1) 00:04:11.393 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:11.393 EAL: Ignore mapping IO port bar(1) 00:04:11.653 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:11.653 EAL: Ignore mapping IO port bar(1) 00:04:11.914 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:11.914 EAL: Ignore mapping IO port bar(1) 00:04:12.174 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:12.174 EAL: Ignore mapping IO port bar(1) 00:04:12.174 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:12.435 EAL: Ignore mapping IO port bar(1) 00:04:12.435 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:12.435 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:12.435 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:12.700 Starting DPDK initialization... 00:04:12.701 Starting SPDK post initialization... 00:04:12.701 SPDK NVMe probe 00:04:12.701 Attaching to 0000:65:00.0 00:04:12.701 Attached to 0000:65:00.0 00:04:12.701 Cleaning up... 00:04:14.628 00:04:14.628 real 0m5.742s 00:04:14.628 user 0m0.113s 00:04:14.628 sys 0m0.179s 00:04:14.628 13:08:36 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.628 13:08:36 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.628 ************************************ 00:04:14.628 END TEST env_dpdk_post_init 00:04:14.628 ************************************ 00:04:14.628 13:08:36 env -- env/env.sh@26 -- # uname 00:04:14.628 13:08:36 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:14.628 13:08:36 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.628 13:08:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.628 13:08:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.628 13:08:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.628 ************************************ 00:04:14.628 START TEST env_mem_callbacks 00:04:14.628 ************************************ 00:04:14.628 13:08:36 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.628 EAL: Detected CPU lcores: 128 00:04:14.628 EAL: Detected NUMA nodes: 2 00:04:14.628 EAL: Detected shared linkage of DPDK 00:04:14.628 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.628 EAL: Selected IOVA mode 'VA' 00:04:14.628 EAL: VFIO support initialized 00:04:14.628 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:14.628 00:04:14.628 00:04:14.628 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.628 http://cunit.sourceforge.net/ 00:04:14.628 00:04:14.628 00:04:14.628 Suite: memory 00:04:14.628 Test: test ... 
00:04:14.628 register 0x200000200000 2097152 00:04:14.628 malloc 3145728 00:04:14.628 register 0x200000400000 4194304 00:04:14.628 buf 0x200000500000 len 3145728 PASSED 00:04:14.628 malloc 64 00:04:14.628 buf 0x2000004fff40 len 64 PASSED 00:04:14.628 malloc 4194304 00:04:14.628 register 0x200000800000 6291456 00:04:14.628 buf 0x200000a00000 len 4194304 PASSED 00:04:14.628 free 0x200000500000 3145728 00:04:14.628 free 0x2000004fff40 64 00:04:14.628 unregister 0x200000400000 4194304 PASSED 00:04:14.628 free 0x200000a00000 4194304 00:04:14.628 unregister 0x200000800000 6291456 PASSED 00:04:14.628 malloc 8388608 00:04:14.628 register 0x200000400000 10485760 00:04:14.628 buf 0x200000600000 len 8388608 PASSED 00:04:14.628 free 0x200000600000 8388608 00:04:14.628 unregister 0x200000400000 10485760 PASSED 00:04:14.628 passed 00:04:14.628 00:04:14.628 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.628 suites 1 1 n/a 0 0 00:04:14.628 tests 1 1 1 0 0 00:04:14.628 asserts 15 15 15 0 n/a 00:04:14.628 00:04:14.628 Elapsed time = 0.008 seconds 00:04:14.628 00:04:14.628 real 0m0.072s 00:04:14.628 user 0m0.022s 00:04:14.628 sys 0m0.050s 00:04:14.628 13:08:36 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.628 13:08:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:14.628 ************************************ 00:04:14.628 END TEST env_mem_callbacks 00:04:14.628 ************************************ 00:04:14.628 00:04:14.628 real 0m7.478s 00:04:14.628 user 0m1.034s 00:04:14.628 sys 0m0.994s 00:04:14.628 13:08:36 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.628 13:08:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.628 ************************************ 00:04:14.628 END TEST env 00:04:14.628 ************************************ 00:04:14.628 13:08:36 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:14.628 13:08:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.628 13:08:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.628 13:08:36 -- common/autotest_common.sh@10 -- # set +x 00:04:14.628 ************************************ 00:04:14.628 START TEST rpc 00:04:14.628 ************************************ 00:04:14.628 13:08:37 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:14.628 * Looking for test storage... 
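
The register/unregister lines in the env_mem_callbacks trace above show SPDK's memory map being told about buffers as the test mallocs and frees them, which in turn fires the mem event callbacks. A minimal sketch of that call pattern, assuming only the public spdk_mem_register()/spdk_mem_unregister() pair from spdk/env.h; the buffer argument and the 2 MiB size here are illustrative:

    #include "spdk/env.h"

    /* Make a buffer visible to SPDK's address translation, then withdraw
     * it again. Each call corresponds to one "register ..." or
     * "unregister ..." line in the trace above; both the address and the
     * length are expected to be 2 MiB aligned. */
    static int
    exercise_mem_map(void *buf)
    {
        int rc;

        rc = spdk_mem_register(buf, 0x200000);
        if (rc != 0) {
            return rc;
        }
        /* ... buf can now be translated by spdk_vtophys() ... */
        return spdk_mem_unregister(buf, 0x200000);
    }
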
00:04:14.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:14.628 13:08:37 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:14.628 13:08:37 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:04:14.628 13:08:37 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:14.889 13:08:37 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:14.889 13:08:37 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:14.889 13:08:37 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:14.889 13:08:37 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:14.889 13:08:37 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:14.889 13:08:37 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:14.889 13:08:37 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:14.889 13:08:37 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:14.889 13:08:37 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:14.889 13:08:37 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:14.889 13:08:37 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:14.889 13:08:37 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:14.889 13:08:37 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:14.889 13:08:37 rpc -- scripts/common.sh@345 -- # : 1
00:04:14.889 13:08:37 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:14.889 13:08:37 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:14.889 13:08:37 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:14.889 13:08:37 rpc -- scripts/common.sh@353 -- # local d=1
00:04:14.889 13:08:37 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:14.889 13:08:37 rpc -- scripts/common.sh@355 -- # echo 1
00:04:14.889 13:08:37 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:14.889 13:08:37 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:14.889 13:08:37 rpc -- scripts/common.sh@353 -- # local d=2
00:04:14.889 13:08:37 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:14.889 13:08:37 rpc -- scripts/common.sh@355 -- # echo 2
00:04:14.889 13:08:37 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:14.889 13:08:37 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:14.889 13:08:37 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:14.889 13:08:37 rpc -- scripts/common.sh@368 -- # return 0
00:04:14.889 13:08:37 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:14.889 13:08:37 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:14.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:14.889 --rc genhtml_branch_coverage=1
00:04:14.889 --rc genhtml_function_coverage=1
00:04:14.889 --rc genhtml_legend=1
00:04:14.889 --rc geninfo_all_blocks=1
00:04:14.889 --rc geninfo_unexecuted_blocks=1
00:04:14.889
00:04:14.889 '
00:04:14.889 13:08:37 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:14.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:14.889 --rc genhtml_branch_coverage=1
00:04:14.889 --rc genhtml_function_coverage=1
00:04:14.889 --rc genhtml_legend=1
00:04:14.889 --rc geninfo_all_blocks=1
00:04:14.889 --rc geninfo_unexecuted_blocks=1
00:04:14.889
00:04:14.889 '
00:04:14.889 13:08:37 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:14.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:14.889 --rc genhtml_branch_coverage=1
00:04:14.889 --rc genhtml_function_coverage=1
00:04:14.889 --rc genhtml_legend=1
00:04:14.889 --rc geninfo_all_blocks=1
00:04:14.889 --rc geninfo_unexecuted_blocks=1
00:04:14.889
00:04:14.889 '
00:04:14.889 13:08:37 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:14.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:14.889 --rc genhtml_branch_coverage=1
00:04:14.889 --rc genhtml_function_coverage=1
00:04:14.889 --rc genhtml_legend=1
00:04:14.889 --rc geninfo_all_blocks=1
00:04:14.889 --rc geninfo_unexecuted_blocks=1
00:04:14.889
00:04:14.889 '
00:04:14.889 13:08:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=673645
00:04:14.889 13:08:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:14.889 13:08:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 673645
00:04:14.889 13:08:37 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:14.889 13:08:37 rpc -- common/autotest_common.sh@835 -- # '[' -z 673645 ']'
00:04:14.889 13:08:37 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:14.889 13:08:37 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:14.889 13:08:37 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:14.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:14.889 13:08:37 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:14.889 13:08:37 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:14.889 [2024-12-05 13:08:37.293984] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization...
00:04:14.889 [2024-12-05 13:08:37.294055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid673645 ]
00:04:14.889 [2024-12-05 13:08:37.377389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:14.889 [2024-12-05 13:08:37.419566] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:14.889 [2024-12-05 13:08:37.419599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 673645' to capture a snapshot of events at runtime.
00:04:14.889 [2024-12-05 13:08:37.419607] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:14.889 [2024-12-05 13:08:37.419613] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:14.889 [2024-12-05 13:08:37.419620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid673645 for offline analysis/debug.
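
The app.c notices above, and the 'Reactor started on core 0' line that follows, come from SPDK's application framework bootstrapping spdk_tgt. A rough sketch of a minimal application that starts the same way; this is hedged, the field names follow recent SPDK releases and the real spdk_tgt sources do considerably more:

    #include "spdk/event.h"

    static void
    start_fn(void *ctx)
    {
        /* Runs on the first reactor once the framework is up; from this
         * point the RPC server listens on opts.rpc_addr, which is what
         * the waitforlisten helper in the trace above polls for. */
        (void)ctx;
    }

    int
    main(int argc, char **argv)
    {
        struct spdk_app_opts opts = {};
        int rc;

        (void)argc;
        (void)argv;

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "spdk_tgt";
        opts.rpc_addr = "/var/tmp/spdk.sock"; /* UNIX socket used by rpc_cmd */
        opts.reactor_mask = "0x1";            /* single core, like -c 0x1 above */

        rc = spdk_app_start(&opts, start_fn, NULL);
        spdk_app_fini();
        return rc;
    }
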
00:04:14.889 [2024-12-05 13:08:37.420237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.831 13:08:38 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.831 13:08:38 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:15.831 13:08:38 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:15.831 13:08:38 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:15.831 13:08:38 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:15.831 13:08:38 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:15.831 13:08:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.831 13:08:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.831 13:08:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.831 ************************************ 00:04:15.831 START TEST rpc_integrity 00:04:15.831 ************************************ 00:04:15.831 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:15.831 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.831 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.831 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.831 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.831 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.831 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:15.831 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:15.831 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:15.831 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.831 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.831 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.831 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:15.831 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.831 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.831 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.831 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.831 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.831 { 00:04:15.831 "name": "Malloc0", 00:04:15.831 "aliases": [ 00:04:15.831 "b43fa903-f38e-4819-af5d-87f9e46c9cd6" 00:04:15.831 ], 00:04:15.831 "product_name": "Malloc disk", 00:04:15.831 "block_size": 512, 00:04:15.831 "num_blocks": 16384, 00:04:15.831 "uuid": "b43fa903-f38e-4819-af5d-87f9e46c9cd6", 00:04:15.831 "assigned_rate_limits": { 00:04:15.831 "rw_ios_per_sec": 0, 00:04:15.831 "rw_mbytes_per_sec": 0, 00:04:15.831 "r_mbytes_per_sec": 0, 00:04:15.831 "w_mbytes_per_sec": 0 00:04:15.831 }, 
00:04:15.831 "claimed": false, 00:04:15.831 "zoned": false, 00:04:15.831 "supported_io_types": { 00:04:15.831 "read": true, 00:04:15.831 "write": true, 00:04:15.831 "unmap": true, 00:04:15.831 "flush": true, 00:04:15.831 "reset": true, 00:04:15.831 "nvme_admin": false, 00:04:15.831 "nvme_io": false, 00:04:15.831 "nvme_io_md": false, 00:04:15.831 "write_zeroes": true, 00:04:15.831 "zcopy": true, 00:04:15.831 "get_zone_info": false, 00:04:15.831 "zone_management": false, 00:04:15.831 "zone_append": false, 00:04:15.831 "compare": false, 00:04:15.831 "compare_and_write": false, 00:04:15.831 "abort": true, 00:04:15.831 "seek_hole": false, 00:04:15.831 "seek_data": false, 00:04:15.831 "copy": true, 00:04:15.831 "nvme_iov_md": false 00:04:15.831 }, 00:04:15.831 "memory_domains": [ 00:04:15.831 { 00:04:15.831 "dma_device_id": "system", 00:04:15.831 "dma_device_type": 1 00:04:15.831 }, 00:04:15.831 { 00:04:15.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.831 "dma_device_type": 2 00:04:15.831 } 00:04:15.831 ], 00:04:15.831 "driver_specific": {} 00:04:15.831 } 00:04:15.831 ]' 00:04:15.831 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.831 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.831 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:15.831 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.831 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.831 [2024-12-05 13:08:38.256974] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:15.831 [2024-12-05 13:08:38.257006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.831 [2024-12-05 13:08:38.257022] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11d68c0 00:04:15.831 [2024-12-05 13:08:38.257029] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.831 [2024-12-05 13:08:38.258401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.831 [2024-12-05 13:08:38.258422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.831 Passthru0 00:04:15.831 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.831 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:15.831 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.831 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.831 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.831 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.831 { 00:04:15.831 "name": "Malloc0", 00:04:15.831 "aliases": [ 00:04:15.831 "b43fa903-f38e-4819-af5d-87f9e46c9cd6" 00:04:15.831 ], 00:04:15.831 "product_name": "Malloc disk", 00:04:15.831 "block_size": 512, 00:04:15.831 "num_blocks": 16384, 00:04:15.831 "uuid": "b43fa903-f38e-4819-af5d-87f9e46c9cd6", 00:04:15.831 "assigned_rate_limits": { 00:04:15.831 "rw_ios_per_sec": 0, 00:04:15.831 "rw_mbytes_per_sec": 0, 00:04:15.831 "r_mbytes_per_sec": 0, 00:04:15.831 "w_mbytes_per_sec": 0 00:04:15.831 }, 00:04:15.831 "claimed": true, 00:04:15.831 "claim_type": "exclusive_write", 00:04:15.831 "zoned": false, 00:04:15.831 "supported_io_types": { 00:04:15.831 "read": true, 00:04:15.831 "write": true, 00:04:15.831 "unmap": true, 00:04:15.831 "flush": 
true, 00:04:15.831 "reset": true, 00:04:15.831 "nvme_admin": false, 00:04:15.831 "nvme_io": false, 00:04:15.831 "nvme_io_md": false, 00:04:15.831 "write_zeroes": true, 00:04:15.831 "zcopy": true, 00:04:15.831 "get_zone_info": false, 00:04:15.831 "zone_management": false, 00:04:15.831 "zone_append": false, 00:04:15.831 "compare": false, 00:04:15.831 "compare_and_write": false, 00:04:15.831 "abort": true, 00:04:15.831 "seek_hole": false, 00:04:15.831 "seek_data": false, 00:04:15.831 "copy": true, 00:04:15.831 "nvme_iov_md": false 00:04:15.831 }, 00:04:15.831 "memory_domains": [ 00:04:15.831 { 00:04:15.831 "dma_device_id": "system", 00:04:15.831 "dma_device_type": 1 00:04:15.831 }, 00:04:15.831 { 00:04:15.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.831 "dma_device_type": 2 00:04:15.831 } 00:04:15.831 ], 00:04:15.831 "driver_specific": {} 00:04:15.831 }, 00:04:15.831 { 00:04:15.831 "name": "Passthru0", 00:04:15.831 "aliases": [ 00:04:15.831 "0585d52a-e21c-5310-a976-bc398b7760b8" 00:04:15.831 ], 00:04:15.832 "product_name": "passthru", 00:04:15.832 "block_size": 512, 00:04:15.832 "num_blocks": 16384, 00:04:15.832 "uuid": "0585d52a-e21c-5310-a976-bc398b7760b8", 00:04:15.832 "assigned_rate_limits": { 00:04:15.832 "rw_ios_per_sec": 0, 00:04:15.832 "rw_mbytes_per_sec": 0, 00:04:15.832 "r_mbytes_per_sec": 0, 00:04:15.832 "w_mbytes_per_sec": 0 00:04:15.832 }, 00:04:15.832 "claimed": false, 00:04:15.832 "zoned": false, 00:04:15.832 "supported_io_types": { 00:04:15.832 "read": true, 00:04:15.832 "write": true, 00:04:15.832 "unmap": true, 00:04:15.832 "flush": true, 00:04:15.832 "reset": true, 00:04:15.832 "nvme_admin": false, 00:04:15.832 "nvme_io": false, 00:04:15.832 "nvme_io_md": false, 00:04:15.832 "write_zeroes": true, 00:04:15.832 "zcopy": true, 00:04:15.832 "get_zone_info": false, 00:04:15.832 "zone_management": false, 00:04:15.832 "zone_append": false, 00:04:15.832 "compare": false, 00:04:15.832 "compare_and_write": false, 00:04:15.832 "abort": true, 00:04:15.832 "seek_hole": false, 00:04:15.832 "seek_data": false, 00:04:15.832 "copy": true, 00:04:15.832 "nvme_iov_md": false 00:04:15.832 }, 00:04:15.832 "memory_domains": [ 00:04:15.832 { 00:04:15.832 "dma_device_id": "system", 00:04:15.832 "dma_device_type": 1 00:04:15.832 }, 00:04:15.832 { 00:04:15.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.832 "dma_device_type": 2 00:04:15.832 } 00:04:15.832 ], 00:04:15.832 "driver_specific": { 00:04:15.832 "passthru": { 00:04:15.832 "name": "Passthru0", 00:04:15.832 "base_bdev_name": "Malloc0" 00:04:15.832 } 00:04:15.832 } 00:04:15.832 } 00:04:15.832 ]' 00:04:15.832 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.832 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.832 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.832 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.832 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.832 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.832 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:15.832 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.832 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.832 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.832 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:15.832 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.832 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.832 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.832 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.832 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.092 13:08:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.093 00:04:16.093 real 0m0.293s 00:04:16.093 user 0m0.190s 00:04:16.093 sys 0m0.042s 00:04:16.093 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.093 13:08:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.093 ************************************ 00:04:16.093 END TEST rpc_integrity 00:04:16.093 ************************************ 00:04:16.093 13:08:38 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:16.093 13:08:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.093 13:08:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.093 13:08:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.093 ************************************ 00:04:16.093 START TEST rpc_plugins 00:04:16.093 ************************************ 00:04:16.093 13:08:38 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:16.093 13:08:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:16.093 13:08:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.093 13:08:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.093 13:08:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.093 13:08:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:16.093 13:08:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:16.093 13:08:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.093 13:08:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.093 13:08:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.093 13:08:38 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:16.093 { 00:04:16.093 "name": "Malloc1", 00:04:16.093 "aliases": [ 00:04:16.093 "8a2df57c-730c-410d-b00c-e0f8f83eecb1" 00:04:16.093 ], 00:04:16.093 "product_name": "Malloc disk", 00:04:16.093 "block_size": 4096, 00:04:16.093 "num_blocks": 256, 00:04:16.093 "uuid": "8a2df57c-730c-410d-b00c-e0f8f83eecb1", 00:04:16.093 "assigned_rate_limits": { 00:04:16.093 "rw_ios_per_sec": 0, 00:04:16.093 "rw_mbytes_per_sec": 0, 00:04:16.093 "r_mbytes_per_sec": 0, 00:04:16.093 "w_mbytes_per_sec": 0 00:04:16.093 }, 00:04:16.093 "claimed": false, 00:04:16.093 "zoned": false, 00:04:16.093 "supported_io_types": { 00:04:16.093 "read": true, 00:04:16.093 "write": true, 00:04:16.093 "unmap": true, 00:04:16.093 "flush": true, 00:04:16.093 "reset": true, 00:04:16.093 "nvme_admin": false, 00:04:16.093 "nvme_io": false, 00:04:16.093 "nvme_io_md": false, 00:04:16.093 "write_zeroes": true, 00:04:16.093 "zcopy": true, 00:04:16.093 "get_zone_info": false, 00:04:16.093 "zone_management": false, 00:04:16.093 "zone_append": false, 00:04:16.093 "compare": false, 00:04:16.093 "compare_and_write": false, 00:04:16.093 "abort": true, 00:04:16.093 "seek_hole": false, 00:04:16.093 "seek_data": false, 00:04:16.093 "copy": true, 00:04:16.093 "nvme_iov_md": false 
00:04:16.093 }, 00:04:16.093 "memory_domains": [ 00:04:16.093 { 00:04:16.093 "dma_device_id": "system", 00:04:16.093 "dma_device_type": 1 00:04:16.093 }, 00:04:16.093 { 00:04:16.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.093 "dma_device_type": 2 00:04:16.093 } 00:04:16.093 ], 00:04:16.093 "driver_specific": {} 00:04:16.093 } 00:04:16.093 ]' 00:04:16.093 13:08:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:16.093 13:08:38 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:16.093 13:08:38 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:16.093 13:08:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.093 13:08:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.093 13:08:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.093 13:08:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:16.093 13:08:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.093 13:08:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.093 13:08:38 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.093 13:08:38 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:16.093 13:08:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:16.093 13:08:38 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:16.093 00:04:16.093 real 0m0.158s 00:04:16.093 user 0m0.099s 00:04:16.093 sys 0m0.019s 00:04:16.093 13:08:38 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.093 13:08:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.093 ************************************ 00:04:16.093 END TEST rpc_plugins 00:04:16.093 ************************************ 00:04:16.355 13:08:38 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:16.355 13:08:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.355 13:08:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.355 13:08:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.355 ************************************ 00:04:16.355 START TEST rpc_trace_cmd_test 00:04:16.355 ************************************ 00:04:16.355 13:08:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:16.355 13:08:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:16.355 13:08:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:16.355 13:08:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.355 13:08:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.355 13:08:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.355 13:08:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:16.355 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid673645", 00:04:16.355 "tpoint_group_mask": "0x8", 00:04:16.355 "iscsi_conn": { 00:04:16.355 "mask": "0x2", 00:04:16.355 "tpoint_mask": "0x0" 00:04:16.355 }, 00:04:16.355 "scsi": { 00:04:16.355 "mask": "0x4", 00:04:16.355 "tpoint_mask": "0x0" 00:04:16.355 }, 00:04:16.355 "bdev": { 00:04:16.355 "mask": "0x8", 00:04:16.355 "tpoint_mask": "0xffffffffffffffff" 00:04:16.355 }, 00:04:16.355 "nvmf_rdma": { 00:04:16.355 "mask": "0x10", 00:04:16.355 "tpoint_mask": "0x0" 00:04:16.355 }, 00:04:16.356 "nvmf_tcp": { 00:04:16.356 "mask": "0x20", 00:04:16.356 
"tpoint_mask": "0x0" 00:04:16.356 }, 00:04:16.356 "ftl": { 00:04:16.356 "mask": "0x40", 00:04:16.356 "tpoint_mask": "0x0" 00:04:16.356 }, 00:04:16.356 "blobfs": { 00:04:16.356 "mask": "0x80", 00:04:16.356 "tpoint_mask": "0x0" 00:04:16.356 }, 00:04:16.356 "dsa": { 00:04:16.356 "mask": "0x200", 00:04:16.356 "tpoint_mask": "0x0" 00:04:16.356 }, 00:04:16.356 "thread": { 00:04:16.356 "mask": "0x400", 00:04:16.356 "tpoint_mask": "0x0" 00:04:16.356 }, 00:04:16.356 "nvme_pcie": { 00:04:16.356 "mask": "0x800", 00:04:16.356 "tpoint_mask": "0x0" 00:04:16.356 }, 00:04:16.356 "iaa": { 00:04:16.356 "mask": "0x1000", 00:04:16.356 "tpoint_mask": "0x0" 00:04:16.356 }, 00:04:16.356 "nvme_tcp": { 00:04:16.356 "mask": "0x2000", 00:04:16.356 "tpoint_mask": "0x0" 00:04:16.356 }, 00:04:16.356 "bdev_nvme": { 00:04:16.356 "mask": "0x4000", 00:04:16.356 "tpoint_mask": "0x0" 00:04:16.356 }, 00:04:16.356 "sock": { 00:04:16.356 "mask": "0x8000", 00:04:16.356 "tpoint_mask": "0x0" 00:04:16.356 }, 00:04:16.356 "blob": { 00:04:16.356 "mask": "0x10000", 00:04:16.356 "tpoint_mask": "0x0" 00:04:16.356 }, 00:04:16.356 "bdev_raid": { 00:04:16.356 "mask": "0x20000", 00:04:16.356 "tpoint_mask": "0x0" 00:04:16.356 }, 00:04:16.356 "scheduler": { 00:04:16.356 "mask": "0x40000", 00:04:16.356 "tpoint_mask": "0x0" 00:04:16.356 } 00:04:16.356 }' 00:04:16.356 13:08:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:16.356 13:08:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:16.356 13:08:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:16.356 13:08:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:16.356 13:08:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:16.356 13:08:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:16.356 13:08:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:16.617 13:08:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:16.617 13:08:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:16.617 13:08:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:16.617 00:04:16.617 real 0m0.251s 00:04:16.617 user 0m0.211s 00:04:16.617 sys 0m0.029s 00:04:16.617 13:08:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.617 13:08:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.617 ************************************ 00:04:16.617 END TEST rpc_trace_cmd_test 00:04:16.617 ************************************ 00:04:16.617 13:08:39 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:16.617 13:08:39 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:16.617 13:08:39 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:16.617 13:08:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.617 13:08:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.617 13:08:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.617 ************************************ 00:04:16.617 START TEST rpc_daemon_integrity 00:04:16.617 ************************************ 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.617 13:08:39 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.617 { 00:04:16.617 "name": "Malloc2", 00:04:16.617 "aliases": [ 00:04:16.617 "00819502-e9b7-4d31-b262-a35a95eccfe6" 00:04:16.617 ], 00:04:16.617 "product_name": "Malloc disk", 00:04:16.617 "block_size": 512, 00:04:16.617 "num_blocks": 16384, 00:04:16.617 "uuid": "00819502-e9b7-4d31-b262-a35a95eccfe6", 00:04:16.617 "assigned_rate_limits": { 00:04:16.617 "rw_ios_per_sec": 0, 00:04:16.617 "rw_mbytes_per_sec": 0, 00:04:16.617 "r_mbytes_per_sec": 0, 00:04:16.617 "w_mbytes_per_sec": 0 00:04:16.617 }, 00:04:16.617 "claimed": false, 00:04:16.617 "zoned": false, 00:04:16.617 "supported_io_types": { 00:04:16.617 "read": true, 00:04:16.617 "write": true, 00:04:16.617 "unmap": true, 00:04:16.617 "flush": true, 00:04:16.617 "reset": true, 00:04:16.617 "nvme_admin": false, 00:04:16.617 "nvme_io": false, 00:04:16.617 "nvme_io_md": false, 00:04:16.617 "write_zeroes": true, 00:04:16.617 "zcopy": true, 00:04:16.617 "get_zone_info": false, 00:04:16.617 "zone_management": false, 00:04:16.617 "zone_append": false, 00:04:16.617 "compare": false, 00:04:16.617 "compare_and_write": false, 00:04:16.617 "abort": true, 00:04:16.617 "seek_hole": false, 00:04:16.617 "seek_data": false, 00:04:16.617 "copy": true, 00:04:16.617 "nvme_iov_md": false 00:04:16.617 }, 00:04:16.617 "memory_domains": [ 00:04:16.617 { 00:04:16.617 "dma_device_id": "system", 00:04:16.617 "dma_device_type": 1 00:04:16.617 }, 00:04:16.617 { 00:04:16.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.617 "dma_device_type": 2 00:04:16.617 } 00:04:16.617 ], 00:04:16.617 "driver_specific": {} 00:04:16.617 } 00:04:16.617 ]' 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.617 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.878 [2024-12-05 13:08:39.187481] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:16.878 
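
The vbdev_passthru notices around this point ('Match on Malloc2' above, 'base bdev opened' and 'bdev claimed' just below) trace the usual open-then-claim sequence a virtual bdev performs on its base device. A rough sketch of that sequence, assuming SPDK's bdev module API; the module handle, callback body, and error handling here are illustrative, not the actual vbdev_passthru source:

    #include "spdk/bdev_module.h"

    static struct spdk_bdev_module passthru_if; /* stand-in module handle */

    static void
    base_bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
                       void *event_ctx)
    {
        /* react to remove/resize events raised by the base bdev */
        (void)type; (void)bdev; (void)event_ctx;
    }

    /* Open the matched base bdev, then claim it so no other module can
     * write to it; success corresponds to the "base bdev opened" and
     * "bdev claimed" notices seen in the trace. */
    static int
    open_and_claim(const char *base_name, struct spdk_bdev_desc **desc)
    {
        int rc;

        rc = spdk_bdev_open_ext(base_name, true, base_bdev_event_cb, NULL, desc);
        if (rc != 0) {
            return rc;
        }

        rc = spdk_bdev_module_claim_bdev(spdk_bdev_desc_get_bdev(*desc),
                                         *desc, &passthru_if);
        if (rc != 0) {
            spdk_bdev_close(*desc);
        }
        return rc;
    }
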
[2024-12-05 13:08:39.187508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.878 [2024-12-05 13:08:39.187520] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11d63c0 00:04:16.878 [2024-12-05 13:08:39.187527] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.878 [2024-12-05 13:08:39.188791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.878 [2024-12-05 13:08:39.188811] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.878 Passthru0 00:04:16.878 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.878 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.878 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.878 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.878 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.878 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.878 { 00:04:16.878 "name": "Malloc2", 00:04:16.878 "aliases": [ 00:04:16.878 "00819502-e9b7-4d31-b262-a35a95eccfe6" 00:04:16.878 ], 00:04:16.878 "product_name": "Malloc disk", 00:04:16.878 "block_size": 512, 00:04:16.878 "num_blocks": 16384, 00:04:16.878 "uuid": "00819502-e9b7-4d31-b262-a35a95eccfe6", 00:04:16.878 "assigned_rate_limits": { 00:04:16.878 "rw_ios_per_sec": 0, 00:04:16.878 "rw_mbytes_per_sec": 0, 00:04:16.878 "r_mbytes_per_sec": 0, 00:04:16.878 "w_mbytes_per_sec": 0 00:04:16.878 }, 00:04:16.878 "claimed": true, 00:04:16.878 "claim_type": "exclusive_write", 00:04:16.878 "zoned": false, 00:04:16.878 "supported_io_types": { 00:04:16.878 "read": true, 00:04:16.878 "write": true, 00:04:16.878 "unmap": true, 00:04:16.878 "flush": true, 00:04:16.878 "reset": true, 00:04:16.878 "nvme_admin": false, 00:04:16.878 "nvme_io": false, 00:04:16.878 "nvme_io_md": false, 00:04:16.878 "write_zeroes": true, 00:04:16.878 "zcopy": true, 00:04:16.878 "get_zone_info": false, 00:04:16.878 "zone_management": false, 00:04:16.878 "zone_append": false, 00:04:16.878 "compare": false, 00:04:16.878 "compare_and_write": false, 00:04:16.878 "abort": true, 00:04:16.878 "seek_hole": false, 00:04:16.878 "seek_data": false, 00:04:16.878 "copy": true, 00:04:16.878 "nvme_iov_md": false 00:04:16.878 }, 00:04:16.878 "memory_domains": [ 00:04:16.878 { 00:04:16.878 "dma_device_id": "system", 00:04:16.878 "dma_device_type": 1 00:04:16.878 }, 00:04:16.878 { 00:04:16.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.878 "dma_device_type": 2 00:04:16.878 } 00:04:16.878 ], 00:04:16.878 "driver_specific": {} 00:04:16.878 }, 00:04:16.878 { 00:04:16.878 "name": "Passthru0", 00:04:16.878 "aliases": [ 00:04:16.878 "2f06cd4b-c80c-5e36-aacc-6c0fd7a063be" 00:04:16.878 ], 00:04:16.878 "product_name": "passthru", 00:04:16.878 "block_size": 512, 00:04:16.878 "num_blocks": 16384, 00:04:16.878 "uuid": "2f06cd4b-c80c-5e36-aacc-6c0fd7a063be", 00:04:16.878 "assigned_rate_limits": { 00:04:16.878 "rw_ios_per_sec": 0, 00:04:16.878 "rw_mbytes_per_sec": 0, 00:04:16.878 "r_mbytes_per_sec": 0, 00:04:16.878 "w_mbytes_per_sec": 0 00:04:16.878 }, 00:04:16.878 "claimed": false, 00:04:16.878 "zoned": false, 00:04:16.878 "supported_io_types": { 00:04:16.878 "read": true, 00:04:16.878 "write": true, 00:04:16.878 "unmap": true, 00:04:16.878 "flush": true, 00:04:16.878 "reset": true, 
00:04:16.878 "nvme_admin": false, 00:04:16.878 "nvme_io": false, 00:04:16.878 "nvme_io_md": false, 00:04:16.878 "write_zeroes": true, 00:04:16.878 "zcopy": true, 00:04:16.878 "get_zone_info": false, 00:04:16.878 "zone_management": false, 00:04:16.878 "zone_append": false, 00:04:16.878 "compare": false, 00:04:16.878 "compare_and_write": false, 00:04:16.878 "abort": true, 00:04:16.878 "seek_hole": false, 00:04:16.878 "seek_data": false, 00:04:16.878 "copy": true, 00:04:16.878 "nvme_iov_md": false 00:04:16.878 }, 00:04:16.878 "memory_domains": [ 00:04:16.878 { 00:04:16.878 "dma_device_id": "system", 00:04:16.878 "dma_device_type": 1 00:04:16.878 }, 00:04:16.878 { 00:04:16.878 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.878 "dma_device_type": 2 00:04:16.878 } 00:04:16.878 ], 00:04:16.878 "driver_specific": { 00:04:16.878 "passthru": { 00:04:16.878 "name": "Passthru0", 00:04:16.878 "base_bdev_name": "Malloc2" 00:04:16.878 } 00:04:16.878 } 00:04:16.878 } 00:04:16.878 ]' 00:04:16.878 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.878 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.878 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.879 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.879 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.879 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.879 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:16.879 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.879 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.879 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.879 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.879 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.879 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.879 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.879 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.879 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.879 13:08:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.879 00:04:16.879 real 0m0.298s 00:04:16.879 user 0m0.187s 00:04:16.879 sys 0m0.041s 00:04:16.879 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.879 13:08:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.879 ************************************ 00:04:16.879 END TEST rpc_daemon_integrity 00:04:16.879 ************************************ 00:04:16.879 13:08:39 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:16.879 13:08:39 rpc -- rpc/rpc.sh@84 -- # killprocess 673645 00:04:16.879 13:08:39 rpc -- common/autotest_common.sh@954 -- # '[' -z 673645 ']' 00:04:16.879 13:08:39 rpc -- common/autotest_common.sh@958 -- # kill -0 673645 00:04:16.879 13:08:39 rpc -- common/autotest_common.sh@959 -- # uname 00:04:16.879 13:08:39 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.879 13:08:39 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 673645 
00:04:16.879 13:08:39 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:16.879 13:08:39 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:16.879 13:08:39 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 673645'
00:04:16.879 killing process with pid 673645
00:04:16.879 13:08:39 rpc -- common/autotest_common.sh@973 -- # kill 673645
00:04:16.879 13:08:39 rpc -- common/autotest_common.sh@978 -- # wait 673645
00:04:17.139
00:04:17.139 real	0m2.617s
00:04:17.139 user	0m3.389s
00:04:17.139 sys	0m0.758s
00:04:17.139 13:08:39 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:17.139 13:08:39 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:17.139 ************************************
00:04:17.139 END TEST rpc
00:04:17.139 ************************************
00:04:17.139 13:08:39 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:04:17.139 13:08:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:17.139 13:08:39 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:17.139 13:08:39 -- common/autotest_common.sh@10 -- # set +x
00:04:17.399 ************************************
00:04:17.399 START TEST skip_rpc
00:04:17.399 ************************************
00:04:17.399 13:08:39 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:04:17.399 * Looking for test storage...
00:04:17.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:17.399 13:08:39 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:17.399 13:08:39 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:04:17.400 13:08:39 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:17.400 13:08:39 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@345 -- # : 1
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:17.400 13:08:39 skip_rpc -- scripts/common.sh@368 -- # return 0
00:04:17.400 13:08:39 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:17.400 13:08:39 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:17.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:17.400 --rc genhtml_branch_coverage=1
00:04:17.400 --rc genhtml_function_coverage=1
00:04:17.400 --rc genhtml_legend=1
00:04:17.400 --rc geninfo_all_blocks=1
00:04:17.400 --rc geninfo_unexecuted_blocks=1
00:04:17.400
00:04:17.400 '
00:04:17.400 13:08:39 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:17.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:17.400 --rc genhtml_branch_coverage=1
00:04:17.400 --rc genhtml_function_coverage=1
00:04:17.400 --rc genhtml_legend=1
00:04:17.400 --rc geninfo_all_blocks=1
00:04:17.400 --rc geninfo_unexecuted_blocks=1
00:04:17.400
00:04:17.400 '
00:04:17.400 13:08:39 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:17.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:17.400 --rc genhtml_branch_coverage=1
00:04:17.400 --rc genhtml_function_coverage=1
00:04:17.400 --rc genhtml_legend=1
00:04:17.400 --rc geninfo_all_blocks=1
00:04:17.400 --rc geninfo_unexecuted_blocks=1
00:04:17.400
00:04:17.400 '
00:04:17.400 13:08:39 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:17.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:17.400 --rc genhtml_branch_coverage=1
00:04:17.400 --rc genhtml_function_coverage=1
00:04:17.400 --rc genhtml_legend=1
00:04:17.400 --rc geninfo_all_blocks=1
00:04:17.400 --rc geninfo_unexecuted_blocks=1
00:04:17.400
00:04:17.400 '
00:04:17.400 13:08:39 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:04:17.400 13:08:39 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:04:17.400 13:08:39 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:04:17.400 13:08:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:17.400 13:08:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:17.400 13:08:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:17.400 ************************************
00:04:17.400 START TEST skip_rpc
00:04:17.400 ************************************
00:04:17.400 13:08:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:04:17.400 13:08:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=674283
00:04:17.400 13:08:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:17.400 13:08:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:04:17.400 13:08:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:04:17.662 [2024-12-05 13:08:40.011579] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization...
00:04:17.662 [2024-12-05 13:08:40.011638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid674283 ]
00:04:17.662 [2024-12-05 13:08:40.090729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:17.662 [2024-12-05 13:08:40.128559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 674283
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 674283 ']'
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 674283
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:04:22.977 13:08:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 674283
00:04:22.977 13:08:45 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:04:22.977 13:08:45 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:04:22.977 13:08:45 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 674283'
00:04:22.977 killing process with pid 674283
00:04:22.977 13:08:45 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 674283
00:04:22.977 13:08:45 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 674283
00:04:22.977
00:04:22.977 real	0m5.286s
00:04:22.977 user	0m5.087s
00:04:22.977 sys	0m0.241s
00:04:22.977 13:08:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:22.977 13:08:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:22.977 ************************************
00:04:22.977 END TEST skip_rpc
00:04:22.977 ************************************
00:04:22.977 13:08:45 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:04:22.977 13:08:45 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:22.977 13:08:45 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:22.977 13:08:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:22.977 ************************************
00:04:22.977 START TEST skip_rpc_with_json
00:04:22.977 ************************************
00:04:22.977 13:08:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:04:22.977 13:08:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:04:22.977 13:08:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=675358
00:04:22.977 13:08:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:22.977 13:08:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 675358
00:04:22.977 13:08:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 675358 ']'
00:04:22.977 13:08:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:22.977 13:08:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:22.977 13:08:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:22.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:22.977 13:08:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:22.977 13:08:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:22.977 13:08:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:04:22.977 [2024-12-05 13:08:45.370544] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization...
00:04:22.977 [2024-12-05 13:08:45.370598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid675358 ] 00:04:22.977 [2024-12-05 13:08:45.450114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.977 [2024-12-05 13:08:45.488918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.917 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.917 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:23.917 13:08:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:23.917 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.917 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.917 [2024-12-05 13:08:46.151816] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:23.917 request: 00:04:23.917 { 00:04:23.917 "trtype": "tcp", 00:04:23.917 "method": "nvmf_get_transports", 00:04:23.917 "req_id": 1 00:04:23.917 } 00:04:23.917 Got JSON-RPC error response 00:04:23.917 response: 00:04:23.917 { 00:04:23.917 "code": -19, 00:04:23.917 "message": "No such device" 00:04:23.917 } 00:04:23.917 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:23.917 13:08:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:23.917 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.917 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.917 [2024-12-05 13:08:46.159925] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:23.917 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.917 13:08:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:23.917 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.917 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.917 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.917 13:08:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:23.917 { 00:04:23.917 "subsystems": [ 00:04:23.917 { 00:04:23.917 "subsystem": "fsdev", 00:04:23.917 "config": [ 00:04:23.917 { 00:04:23.917 "method": "fsdev_set_opts", 00:04:23.917 "params": { 00:04:23.918 "fsdev_io_pool_size": 65535, 00:04:23.918 "fsdev_io_cache_size": 256 00:04:23.918 } 00:04:23.918 } 00:04:23.918 ] 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "subsystem": "vfio_user_target", 00:04:23.918 "config": null 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "subsystem": "keyring", 00:04:23.918 "config": [] 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "subsystem": "iobuf", 00:04:23.918 "config": [ 00:04:23.918 { 00:04:23.918 "method": "iobuf_set_options", 00:04:23.918 "params": { 00:04:23.918 "small_pool_count": 8192, 00:04:23.918 "large_pool_count": 1024, 00:04:23.918 "small_bufsize": 8192, 00:04:23.918 "large_bufsize": 135168, 00:04:23.918 "enable_numa": false 00:04:23.918 } 00:04:23.918 } 00:04:23.918 
] 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "subsystem": "sock", 00:04:23.918 "config": [ 00:04:23.918 { 00:04:23.918 "method": "sock_set_default_impl", 00:04:23.918 "params": { 00:04:23.918 "impl_name": "posix" 00:04:23.918 } 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "method": "sock_impl_set_options", 00:04:23.918 "params": { 00:04:23.918 "impl_name": "ssl", 00:04:23.918 "recv_buf_size": 4096, 00:04:23.918 "send_buf_size": 4096, 00:04:23.918 "enable_recv_pipe": true, 00:04:23.918 "enable_quickack": false, 00:04:23.918 "enable_placement_id": 0, 00:04:23.918 "enable_zerocopy_send_server": true, 00:04:23.918 "enable_zerocopy_send_client": false, 00:04:23.918 "zerocopy_threshold": 0, 00:04:23.918 "tls_version": 0, 00:04:23.918 "enable_ktls": false 00:04:23.918 } 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "method": "sock_impl_set_options", 00:04:23.918 "params": { 00:04:23.918 "impl_name": "posix", 00:04:23.918 "recv_buf_size": 2097152, 00:04:23.918 "send_buf_size": 2097152, 00:04:23.918 "enable_recv_pipe": true, 00:04:23.918 "enable_quickack": false, 00:04:23.918 "enable_placement_id": 0, 00:04:23.918 "enable_zerocopy_send_server": true, 00:04:23.918 "enable_zerocopy_send_client": false, 00:04:23.918 "zerocopy_threshold": 0, 00:04:23.918 "tls_version": 0, 00:04:23.918 "enable_ktls": false 00:04:23.918 } 00:04:23.918 } 00:04:23.918 ] 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "subsystem": "vmd", 00:04:23.918 "config": [] 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "subsystem": "accel", 00:04:23.918 "config": [ 00:04:23.918 { 00:04:23.918 "method": "accel_set_options", 00:04:23.918 "params": { 00:04:23.918 "small_cache_size": 128, 00:04:23.918 "large_cache_size": 16, 00:04:23.918 "task_count": 2048, 00:04:23.918 "sequence_count": 2048, 00:04:23.918 "buf_count": 2048 00:04:23.918 } 00:04:23.918 } 00:04:23.918 ] 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "subsystem": "bdev", 00:04:23.918 "config": [ 00:04:23.918 { 00:04:23.918 "method": "bdev_set_options", 00:04:23.918 "params": { 00:04:23.918 "bdev_io_pool_size": 65535, 00:04:23.918 "bdev_io_cache_size": 256, 00:04:23.918 "bdev_auto_examine": true, 00:04:23.918 "iobuf_small_cache_size": 128, 00:04:23.918 "iobuf_large_cache_size": 16 00:04:23.918 } 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "method": "bdev_raid_set_options", 00:04:23.918 "params": { 00:04:23.918 "process_window_size_kb": 1024, 00:04:23.918 "process_max_bandwidth_mb_sec": 0 00:04:23.918 } 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "method": "bdev_iscsi_set_options", 00:04:23.918 "params": { 00:04:23.918 "timeout_sec": 30 00:04:23.918 } 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "method": "bdev_nvme_set_options", 00:04:23.918 "params": { 00:04:23.918 "action_on_timeout": "none", 00:04:23.918 "timeout_us": 0, 00:04:23.918 "timeout_admin_us": 0, 00:04:23.918 "keep_alive_timeout_ms": 10000, 00:04:23.918 "arbitration_burst": 0, 00:04:23.918 "low_priority_weight": 0, 00:04:23.918 "medium_priority_weight": 0, 00:04:23.918 "high_priority_weight": 0, 00:04:23.918 "nvme_adminq_poll_period_us": 10000, 00:04:23.918 "nvme_ioq_poll_period_us": 0, 00:04:23.918 "io_queue_requests": 0, 00:04:23.918 "delay_cmd_submit": true, 00:04:23.918 "transport_retry_count": 4, 00:04:23.918 "bdev_retry_count": 3, 00:04:23.918 "transport_ack_timeout": 0, 00:04:23.918 "ctrlr_loss_timeout_sec": 0, 00:04:23.918 "reconnect_delay_sec": 0, 00:04:23.918 "fast_io_fail_timeout_sec": 0, 00:04:23.918 "disable_auto_failback": false, 00:04:23.918 "generate_uuids": false, 00:04:23.918 "transport_tos": 0, 
00:04:23.918 "nvme_error_stat": false, 00:04:23.918 "rdma_srq_size": 0, 00:04:23.918 "io_path_stat": false, 00:04:23.918 "allow_accel_sequence": false, 00:04:23.918 "rdma_max_cq_size": 0, 00:04:23.918 "rdma_cm_event_timeout_ms": 0, 00:04:23.918 "dhchap_digests": [ 00:04:23.918 "sha256", 00:04:23.918 "sha384", 00:04:23.918 "sha512" 00:04:23.918 ], 00:04:23.918 "dhchap_dhgroups": [ 00:04:23.918 "null", 00:04:23.918 "ffdhe2048", 00:04:23.918 "ffdhe3072", 00:04:23.918 "ffdhe4096", 00:04:23.918 "ffdhe6144", 00:04:23.918 "ffdhe8192" 00:04:23.918 ] 00:04:23.918 } 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "method": "bdev_nvme_set_hotplug", 00:04:23.918 "params": { 00:04:23.918 "period_us": 100000, 00:04:23.918 "enable": false 00:04:23.918 } 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "method": "bdev_wait_for_examine" 00:04:23.918 } 00:04:23.918 ] 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "subsystem": "scsi", 00:04:23.918 "config": null 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "subsystem": "scheduler", 00:04:23.918 "config": [ 00:04:23.918 { 00:04:23.918 "method": "framework_set_scheduler", 00:04:23.918 "params": { 00:04:23.918 "name": "static" 00:04:23.918 } 00:04:23.918 } 00:04:23.918 ] 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "subsystem": "vhost_scsi", 00:04:23.918 "config": [] 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "subsystem": "vhost_blk", 00:04:23.918 "config": [] 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "subsystem": "ublk", 00:04:23.918 "config": [] 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "subsystem": "nbd", 00:04:23.918 "config": [] 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "subsystem": "nvmf", 00:04:23.918 "config": [ 00:04:23.918 { 00:04:23.918 "method": "nvmf_set_config", 00:04:23.918 "params": { 00:04:23.918 "discovery_filter": "match_any", 00:04:23.918 "admin_cmd_passthru": { 00:04:23.918 "identify_ctrlr": false 00:04:23.918 }, 00:04:23.918 "dhchap_digests": [ 00:04:23.918 "sha256", 00:04:23.918 "sha384", 00:04:23.918 "sha512" 00:04:23.918 ], 00:04:23.918 "dhchap_dhgroups": [ 00:04:23.918 "null", 00:04:23.918 "ffdhe2048", 00:04:23.918 "ffdhe3072", 00:04:23.918 "ffdhe4096", 00:04:23.918 "ffdhe6144", 00:04:23.918 "ffdhe8192" 00:04:23.918 ] 00:04:23.918 } 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "method": "nvmf_set_max_subsystems", 00:04:23.918 "params": { 00:04:23.918 "max_subsystems": 1024 00:04:23.918 } 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "method": "nvmf_set_crdt", 00:04:23.918 "params": { 00:04:23.918 "crdt1": 0, 00:04:23.918 "crdt2": 0, 00:04:23.918 "crdt3": 0 00:04:23.918 } 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "method": "nvmf_create_transport", 00:04:23.918 "params": { 00:04:23.918 "trtype": "TCP", 00:04:23.918 "max_queue_depth": 128, 00:04:23.918 "max_io_qpairs_per_ctrlr": 127, 00:04:23.918 "in_capsule_data_size": 4096, 00:04:23.918 "max_io_size": 131072, 00:04:23.918 "io_unit_size": 131072, 00:04:23.918 "max_aq_depth": 128, 00:04:23.918 "num_shared_buffers": 511, 00:04:23.918 "buf_cache_size": 4294967295, 00:04:23.918 "dif_insert_or_strip": false, 00:04:23.918 "zcopy": false, 00:04:23.918 "c2h_success": true, 00:04:23.918 "sock_priority": 0, 00:04:23.918 "abort_timeout_sec": 1, 00:04:23.918 "ack_timeout": 0, 00:04:23.918 "data_wr_pool_size": 0 00:04:23.918 } 00:04:23.918 } 00:04:23.918 ] 00:04:23.918 }, 00:04:23.918 { 00:04:23.918 "subsystem": "iscsi", 00:04:23.918 "config": [ 00:04:23.918 { 00:04:23.918 "method": "iscsi_set_options", 00:04:23.918 "params": { 00:04:23.918 "node_base": "iqn.2016-06.io.spdk", 00:04:23.918 "max_sessions": 
128, 00:04:23.918 "max_connections_per_session": 2, 00:04:23.918 "max_queue_depth": 64, 00:04:23.918 "default_time2wait": 2, 00:04:23.918 "default_time2retain": 20, 00:04:23.918 "first_burst_length": 8192, 00:04:23.918 "immediate_data": true, 00:04:23.918 "allow_duplicated_isid": false, 00:04:23.918 "error_recovery_level": 0, 00:04:23.918 "nop_timeout": 60, 00:04:23.918 "nop_in_interval": 30, 00:04:23.918 "disable_chap": false, 00:04:23.918 "require_chap": false, 00:04:23.918 "mutual_chap": false, 00:04:23.918 "chap_group": 0, 00:04:23.918 "max_large_datain_per_connection": 64, 00:04:23.918 "max_r2t_per_connection": 4, 00:04:23.918 "pdu_pool_size": 36864, 00:04:23.918 "immediate_data_pool_size": 16384, 00:04:23.918 "data_out_pool_size": 2048 00:04:23.918 } 00:04:23.919 } 00:04:23.919 ] 00:04:23.919 } 00:04:23.919 ] 00:04:23.919 } 00:04:23.919 13:08:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:23.919 13:08:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 675358 00:04:23.919 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 675358 ']' 00:04:23.919 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 675358 00:04:23.919 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:23.919 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.919 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 675358 00:04:23.919 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.919 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.919 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 675358' 00:04:23.919 killing process with pid 675358 00:04:23.919 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 675358 00:04:23.919 13:08:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 675358 00:04:24.179 13:08:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=675662 00:04:24.179 13:08:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:24.179 13:08:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 675662 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 675662 ']' 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 675662 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 675662 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 
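[editorial aside] What skip_rpc_with_json verifies here: the configuration dumped by save_config (the JSON printed above) is sufficient to boot a fresh target without issuing a single RPC. Condensed to its two steps, with the workspace prefix shortened for readability (flags as in the trace):

    # 1) dump the live configuration from the running target
    ./scripts/rpc.py -s /var/tmp/spdk.sock save_config > test/rpc/config.json
    # 2) relaunch and replay it; the later grep for "TCP Transport Init" in
    #    log.txt proves the transport came back purely from the JSON
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json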
'killing process with pid 675662' 00:04:29.469 killing process with pid 675662 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 675662 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 675662 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:29.469 00:04:29.469 real 0m6.549s 00:04:29.469 user 0m6.405s 00:04:29.469 sys 0m0.553s 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.469 ************************************ 00:04:29.469 END TEST skip_rpc_with_json 00:04:29.469 ************************************ 00:04:29.469 13:08:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:29.469 13:08:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.469 13:08:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.469 13:08:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.469 ************************************ 00:04:29.469 START TEST skip_rpc_with_delay 00:04:29.469 ************************************ 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:29.469 13:08:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.470 13:08:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.470 13:08:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.470 13:08:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.470 13:08:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.470 13:08:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.470 13:08:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.470 13:08:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.470 13:08:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:29.470 13:08:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.470 [2024-12-05 
13:08:51.997604] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:29.470 13:08:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:29.470 13:08:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.470 13:08:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:29.470 13:08:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.470 00:04:29.470 real 0m0.075s 00:04:29.470 user 0m0.045s 00:04:29.470 sys 0m0.029s 00:04:29.470 13:08:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.470 13:08:52 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:29.470 ************************************ 00:04:29.470 END TEST skip_rpc_with_delay 00:04:29.470 ************************************ 00:04:29.731 13:08:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:29.731 13:08:52 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:29.731 13:08:52 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:29.731 13:08:52 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.731 13:08:52 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.731 13:08:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.731 ************************************ 00:04:29.731 START TEST exit_on_failed_rpc_init 00:04:29.731 ************************************ 00:04:29.731 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:29.731 13:08:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=676788 00:04:29.731 13:08:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 676788 00:04:29.732 13:08:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.732 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 676788 ']' 00:04:29.732 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.732 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.732 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.732 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.732 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.732 [2024-12-05 13:08:52.148600] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
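[editorial aside] skip_rpc_with_delay, which just finished above, is purely a flag-validation test: spdk_tgt must refuse --wait-for-rpc when --no-rpc-server is given, and the NOT wrapper turns that refusal into a pass (es=1). A simplified shape of that negative test; the real NOT in common/autotest_common.sh also classifies exit codes:

    NOT() { if "$@"; then return 1; else return 0; fi; }   # succeeds only on failure
    NOT ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc \
        && echo "target correctly refused --wait-for-rpc without an RPC server"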
00:04:29.732 [2024-12-05 13:08:52.148652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid676788 ] 00:04:29.732 [2024-12-05 13:08:52.228372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.732 [2024-12-05 13:08:52.265080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.703 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:30.703 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:30.703 13:08:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.703 13:08:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:30.703 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:30.703 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:30.703 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.703 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:30.703 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.703 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:30.703 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.703 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:30.703 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:30.703 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:30.703 13:08:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:30.703 [2024-12-05 13:08:53.014047] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:04:30.703 [2024-12-05 13:08:53.014097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid677059 ] 00:04:30.703 [2024-12-05 13:08:53.109293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.703 [2024-12-05 13:08:53.146368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:30.703 [2024-12-05 13:08:53.146422] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
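[editorial aside] The error just printed is the point of exit_on_failed_rpc_init: with the first target still holding /var/tmp/spdk.sock, a second instance on a different core mask must fail RPC initialization and stop the app with a non-zero status. Reduced to its two moving parts (the real test waits for the first instance's listener before launching the second):

    ./build/bin/spdk_tgt -m 0x1 &       # first instance owns /var/tmp/spdk.sock
    # second instance, same default RPC socket: must exit non-zero
    if ./build/bin/spdk_tgt -m 0x2; then
        echo "second target unexpectedly started" >&2
        exit 1
    fi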
00:04:30.703 [2024-12-05 13:08:53.146431] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:30.703 [2024-12-05 13:08:53.146438] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 676788 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 676788 ']' 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 676788 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 676788 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 676788' 00:04:30.703 killing process with pid 676788 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 676788 00:04:30.703 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 676788 00:04:30.964 00:04:30.964 real 0m1.368s 00:04:30.964 user 0m1.610s 00:04:30.964 sys 0m0.385s 00:04:30.964 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.964 13:08:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:30.964 ************************************ 00:04:30.964 END TEST exit_on_failed_rpc_init 00:04:30.964 ************************************ 00:04:30.964 13:08:53 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:30.964 00:04:30.964 real 0m13.777s 00:04:30.964 user 0m13.364s 00:04:30.964 sys 0m1.516s 00:04:30.964 13:08:53 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.964 13:08:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.964 ************************************ 00:04:30.964 END TEST skip_rpc 00:04:30.964 ************************************ 00:04:31.226 13:08:53 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:31.226 13:08:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.226 13:08:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.226 13:08:53 -- 
common/autotest_common.sh@10 -- # set +x 00:04:31.226 ************************************ 00:04:31.226 START TEST rpc_client 00:04:31.226 ************************************ 00:04:31.226 13:08:53 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:31.226 * Looking for test storage... 00:04:31.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:31.226 13:08:53 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.226 13:08:53 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.226 13:08:53 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.226 13:08:53 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.226 13:08:53 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:31.226 13:08:53 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.226 13:08:53 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.226 --rc genhtml_branch_coverage=1 00:04:31.226 --rc genhtml_function_coverage=1 00:04:31.226 --rc genhtml_legend=1 00:04:31.226 --rc geninfo_all_blocks=1 00:04:31.226 --rc geninfo_unexecuted_blocks=1 00:04:31.226 00:04:31.226 ' 00:04:31.226 13:08:53 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.226 --rc genhtml_branch_coverage=1 00:04:31.226 --rc genhtml_function_coverage=1 00:04:31.226 --rc genhtml_legend=1 00:04:31.226 --rc geninfo_all_blocks=1 00:04:31.226 --rc geninfo_unexecuted_blocks=1 00:04:31.226 00:04:31.226 ' 00:04:31.226 13:08:53 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:31.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.226 --rc genhtml_branch_coverage=1 00:04:31.226 --rc genhtml_function_coverage=1 00:04:31.226 --rc genhtml_legend=1 00:04:31.226 --rc geninfo_all_blocks=1 00:04:31.226 --rc geninfo_unexecuted_blocks=1 00:04:31.226 00:04:31.226 ' 00:04:31.226 13:08:53 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.226 --rc genhtml_branch_coverage=1 00:04:31.226 --rc genhtml_function_coverage=1 00:04:31.226 --rc genhtml_legend=1 00:04:31.226 --rc geninfo_all_blocks=1 00:04:31.226 --rc geninfo_unexecuted_blocks=1 00:04:31.226 00:04:31.226 ' 00:04:31.226 13:08:53 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:31.488 OK 00:04:31.488 13:08:53 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:31.488 00:04:31.488 real 0m0.223s 00:04:31.488 user 0m0.133s 00:04:31.488 sys 0m0.102s 00:04:31.488 13:08:53 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.488 13:08:53 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:31.488 ************************************ 00:04:31.488 END TEST rpc_client 00:04:31.488 ************************************ 00:04:31.488 13:08:53 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
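[editorial aside] rpc_client_test above is a small compiled test client that speaks JSON-RPC over the target's Unix socket; the "OK" line is its pass marker. The same round trip can be made by hand with the Python client used throughout this log, assuming a target is listening on the default socket:

    # spdk_get_version is the same method the skip_rpc test invoked earlier
    ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version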
00:04:31.488 13:08:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.488 13:08:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.488 13:08:53 -- common/autotest_common.sh@10 -- # set +x 00:04:31.488 ************************************ 00:04:31.488 START TEST json_config 00:04:31.488 ************************************ 00:04:31.488 13:08:53 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:31.488 13:08:53 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:31.488 13:08:53 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:31.488 13:08:53 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:31.488 13:08:54 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:31.488 13:08:54 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.488 13:08:54 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.488 13:08:54 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.488 13:08:54 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.488 13:08:54 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.488 13:08:54 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.488 13:08:54 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.488 13:08:54 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.488 13:08:54 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.488 13:08:54 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.488 13:08:54 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.488 13:08:54 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:31.488 13:08:54 json_config -- scripts/common.sh@345 -- # : 1 00:04:31.488 13:08:54 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.488 13:08:54 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.488 13:08:54 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:31.488 13:08:54 json_config -- scripts/common.sh@353 -- # local d=1 00:04:31.488 13:08:54 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.488 13:08:54 json_config -- scripts/common.sh@355 -- # echo 1 00:04:31.488 13:08:54 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.488 13:08:54 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:31.488 13:08:54 json_config -- scripts/common.sh@353 -- # local d=2 00:04:31.488 13:08:54 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.488 13:08:54 json_config -- scripts/common.sh@355 -- # echo 2 00:04:31.488 13:08:54 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.488 13:08:54 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.488 13:08:54 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.488 13:08:54 json_config -- scripts/common.sh@368 -- # return 0 00:04:31.488 13:08:54 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.488 13:08:54 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:31.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.488 --rc genhtml_branch_coverage=1 00:04:31.488 --rc genhtml_function_coverage=1 00:04:31.488 --rc genhtml_legend=1 00:04:31.488 --rc geninfo_all_blocks=1 00:04:31.488 --rc geninfo_unexecuted_blocks=1 00:04:31.488 00:04:31.488 ' 00:04:31.488 13:08:54 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:31.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.488 --rc genhtml_branch_coverage=1 00:04:31.488 --rc genhtml_function_coverage=1 00:04:31.488 --rc genhtml_legend=1 00:04:31.488 --rc geninfo_all_blocks=1 00:04:31.488 --rc geninfo_unexecuted_blocks=1 00:04:31.488 00:04:31.488 ' 00:04:31.488 13:08:54 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:31.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.488 --rc genhtml_branch_coverage=1 00:04:31.488 --rc genhtml_function_coverage=1 00:04:31.488 --rc genhtml_legend=1 00:04:31.488 --rc geninfo_all_blocks=1 00:04:31.488 --rc geninfo_unexecuted_blocks=1 00:04:31.488 00:04:31.488 ' 00:04:31.488 13:08:54 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:31.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.488 --rc genhtml_branch_coverage=1 00:04:31.488 --rc genhtml_function_coverage=1 00:04:31.488 --rc genhtml_legend=1 00:04:31.488 --rc geninfo_all_blocks=1 00:04:31.488 --rc geninfo_unexecuted_blocks=1 00:04:31.488 00:04:31.488 ' 00:04:31.488 13:08:54 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:31.488 13:08:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:31.488 13:08:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:31.488 13:08:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:31.488 13:08:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:31.488 13:08:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:31.488 13:08:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:31.488 13:08:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:31.488 13:08:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:31.488 13:08:54 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:31.488 13:08:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:31.488 13:08:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:31.750 13:08:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:31.750 13:08:54 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:31.750 13:08:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:31.750 13:08:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:31.750 13:08:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:31.750 13:08:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:31.750 13:08:54 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:31.750 13:08:54 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:31.750 13:08:54 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:31.750 13:08:54 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:31.750 13:08:54 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:31.750 13:08:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.750 13:08:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.750 13:08:54 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.750 13:08:54 json_config -- paths/export.sh@5 -- # export PATH 00:04:31.751 13:08:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:31.751 13:08:54 json_config -- nvmf/common.sh@51 -- # : 0 00:04:31.751 13:08:54 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:31.751 13:08:54 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:31.751 13:08:54 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:31.751 13:08:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:31.751 13:08:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:31.751 13:08:54 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:31.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:31.751 13:08:54 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:31.751 13:08:54 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:31.751 13:08:54 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:31.751 INFO: JSON configuration test init 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:31.751 13:08:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.751 13:08:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:31.751 13:08:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.751 13:08:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.751 13:08:54 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:31.751 13:08:54 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:31.751 13:08:54 json_config -- json_config/common.sh@10 -- # shift 00:04:31.751 13:08:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:31.751 13:08:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:31.751 13:08:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:31.751 13:08:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.751 13:08:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:31.751 13:08:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=677448 00:04:31.751 13:08:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:31.751 Waiting for target to run... 00:04:31.751 13:08:54 json_config -- json_config/common.sh@25 -- # waitforlisten 677448 /var/tmp/spdk_tgt.sock 00:04:31.751 13:08:54 json_config -- common/autotest_common.sh@835 -- # '[' -z 677448 ']' 00:04:31.751 13:08:54 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:31.751 13:08:54 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.751 13:08:54 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:31.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:31.751 13:08:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:31.751 13:08:54 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.751 13:08:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.751 [2024-12-05 13:08:54.151428] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
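[editorial aside] json_config starts its target paused: -r points the RPC server at a private socket and --wait-for-rpc holds subsystem initialization until the harness is ready. A sketch of that handshake; framework_start_init is the standard RPC for resuming a paused app and is shown here as an assumption, since this harness actually proceeds through load_config:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # drive the paused target over its private socket once it is listening
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init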
00:04:31.751 [2024-12-05 13:08:54.151484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid677448 ] 00:04:32.012 [2024-12-05 13:08:54.444592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.012 [2024-12-05 13:08:54.475626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.583 13:08:54 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.583 13:08:54 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:32.583 13:08:54 json_config -- json_config/common.sh@26 -- # echo '' 00:04:32.583 00:04:32.583 13:08:54 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:32.583 13:08:54 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:32.583 13:08:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.583 13:08:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.583 13:08:54 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:32.583 13:08:54 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:32.583 13:08:54 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:32.583 13:08:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.583 13:08:54 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:32.583 13:08:54 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:32.583 13:08:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:33.154 13:08:55 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:33.154 13:08:55 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:33.154 13:08:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.154 13:08:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.154 13:08:55 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:33.154 13:08:55 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:33.154 13:08:55 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:33.154 13:08:55 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:33.154 13:08:55 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:33.154 13:08:55 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:33.154 13:08:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:33.154 13:08:55 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:33.415 13:08:55 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:33.415 13:08:55 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:33.415 13:08:55 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:33.416 13:08:55 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@54 -- # sort 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:33.416 13:08:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:33.416 13:08:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:33.416 13:08:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:33.416 13:08:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:33.416 13:08:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:33.416 MallocForNvmf0 00:04:33.416 13:08:55 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:33.416 13:08:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:33.677 MallocForNvmf1 00:04:33.677 13:08:56 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:33.677 13:08:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:33.937 [2024-12-05 13:08:56.286493] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:33.938 13:08:56 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:33.938 13:08:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:33.938 13:08:56 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:33.938 13:08:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:34.198 13:08:56 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:34.198 13:08:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:34.459 13:08:56 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:34.459 13:08:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:34.459 [2024-12-05 13:08:56.944619] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:34.459 13:08:56 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:34.459 13:08:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:34.459 13:08:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.459 13:08:56 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:34.459 13:08:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:34.459 13:08:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.721 13:08:57 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:34.721 13:08:57 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:34.721 13:08:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:34.721 MallocBdevForConfigChangeCheck 00:04:34.721 13:08:57 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:34.721 13:08:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:34.721 13:08:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.721 13:08:57 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:34.721 13:08:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.293 13:08:57 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:35.293 INFO: shutting down applications... 
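
The trace above assembles the NVMe-oF test target entirely over the target's RPC socket: two malloc bdevs, a TCP transport, one subsystem, two namespaces, and a listener. A minimal hand-runnable sketch of the same sequence, assuming spdk_tgt was started with -r /var/tmp/spdk_tgt.sock (the jenkins workspace prefix is dropped for brevity):

    rpc="./scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    # an 8 MB bdev with 512 B blocks and a 4 MB bdev with 1024 B blocks
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, then a subsystem carrying both bdevs as namespaces
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
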
00:04:35.293 13:08:57 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:35.293 13:08:57 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:35.293 13:08:57 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:35.293 13:08:57 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:35.554 Calling clear_iscsi_subsystem 00:04:35.554 Calling clear_nvmf_subsystem 00:04:35.554 Calling clear_nbd_subsystem 00:04:35.554 Calling clear_ublk_subsystem 00:04:35.554 Calling clear_vhost_blk_subsystem 00:04:35.554 Calling clear_vhost_scsi_subsystem 00:04:35.554 Calling clear_bdev_subsystem 00:04:35.554 13:08:58 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:35.554 13:08:58 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:35.554 13:08:58 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:35.554 13:08:58 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.554 13:08:58 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:35.554 13:08:58 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:35.814 13:08:58 json_config -- json_config/json_config.sh@352 -- # break 00:04:35.814 13:08:58 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:35.814 13:08:58 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:35.814 13:08:58 json_config -- json_config/common.sh@31 -- # local app=target 00:04:35.814 13:08:58 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:35.814 13:08:58 json_config -- json_config/common.sh@35 -- # [[ -n 677448 ]] 00:04:35.814 13:08:58 json_config -- json_config/common.sh@38 -- # kill -SIGINT 677448 00:04:35.814 13:08:58 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:35.814 13:08:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.814 13:08:58 json_config -- json_config/common.sh@41 -- # kill -0 677448 00:04:35.814 13:08:58 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:36.385 13:08:58 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:36.385 13:08:58 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.385 13:08:58 json_config -- json_config/common.sh@41 -- # kill -0 677448 00:04:36.385 13:08:58 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:36.385 13:08:58 json_config -- json_config/common.sh@43 -- # break 00:04:36.385 13:08:58 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:36.385 13:08:58 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:36.385 SPDK target shutdown done 00:04:36.385 13:08:58 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:36.385 INFO: relaunching applications... 
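
Shutdown follows a fixed recipe from json_config/common.sh, visible in the trace: clear the configuration over RPC, send SIGINT, then poll the pid until the reactor exits. A condensed sketch of the polling loop (the 30 x 0.5 s budget matches the traced counters):

    pid=677448                                # target pid from this run
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break   # kill -0 fails once the process is gone
        sleep 0.5
    done
    echo 'SPDK target shutdown done'
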
00:04:36.385 13:08:58 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.385 13:08:58 json_config -- json_config/common.sh@9 -- # local app=target 00:04:36.385 13:08:58 json_config -- json_config/common.sh@10 -- # shift 00:04:36.385 13:08:58 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.385 13:08:58 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.385 13:08:58 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.385 13:08:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.385 13:08:58 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.385 13:08:58 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=678500 00:04:36.385 13:08:58 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.385 Waiting for target to run... 00:04:36.385 13:08:58 json_config -- json_config/common.sh@25 -- # waitforlisten 678500 /var/tmp/spdk_tgt.sock 00:04:36.385 13:08:58 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.385 13:08:58 json_config -- common/autotest_common.sh@835 -- # '[' -z 678500 ']' 00:04:36.385 13:08:58 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.385 13:08:58 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.385 13:08:58 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.386 13:08:58 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.386 13:08:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.386 [2024-12-05 13:08:58.910546] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:04:36.386 [2024-12-05 13:08:58.910613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid678500 ] 00:04:36.646 [2024-12-05 13:08:59.190413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.907 [2024-12-05 13:08:59.220286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.478 [2024-12-05 13:08:59.740061] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:37.478 [2024-12-05 13:08:59.772443] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:37.478 13:08:59 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.478 13:08:59 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:37.478 13:08:59 json_config -- json_config/common.sh@26 -- # echo '' 00:04:37.478 00:04:37.478 13:08:59 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:37.478 13:08:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:37.478 INFO: Checking if target configuration is the same... 
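
The relaunch replays the configuration captured earlier by save_config: spdk_tgt is started with --json pointing at the dump, and the harness blocks until the RPC socket answers. The wait loop below is an illustrative stand-in for waitforlisten, not its actual implementation; any cheap RPC such as spdk_get_version serves as the liveness probe:

    ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json ./spdk_tgt_config.json &
    for ((i = 0; i < 100; i++)); do           # max_retries=100, as traced
        ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 spdk_get_version \
            >/dev/null 2>&1 && break
        sleep 0.1
    done
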
00:04:37.478 13:08:59 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:37.478 13:08:59 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:37.478 13:08:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:37.478 + '[' 2 -ne 2 ']' 00:04:37.478 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:37.479 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:37.479 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:37.479 +++ basename /dev/fd/62 00:04:37.479 ++ mktemp /tmp/62.XXX 00:04:37.479 + tmp_file_1=/tmp/62.YEO 00:04:37.479 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:37.479 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:37.479 + tmp_file_2=/tmp/spdk_tgt_config.json.axj 00:04:37.479 + ret=0 00:04:37.479 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:37.740 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:37.740 + diff -u /tmp/62.YEO /tmp/spdk_tgt_config.json.axj 00:04:37.740 + echo 'INFO: JSON config files are the same' 00:04:37.740 INFO: JSON config files are the same 00:04:37.740 + rm /tmp/62.YEO /tmp/spdk_tgt_config.json.axj 00:04:37.740 + exit 0 00:04:37.740 13:09:00 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:37.740 13:09:00 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:37.740 INFO: changing configuration and checking if this can be detected... 00:04:37.740 13:09:00 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:37.740 13:09:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:38.000 13:09:00 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.000 13:09:00 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:38.000 13:09:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.000 + '[' 2 -ne 2 ']' 00:04:38.000 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:38.000 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
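
The "same configuration" verdict is a normalize-then-diff: the live config is dumped with save_config, both JSON documents are canonicalized by config_filter.py -method sort, and a plain diff -u decides. An equivalent pipeline, assuming config_filter.py filters stdin to stdout (temp-file names here echo this run's mktemp output):

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/62.YEO
    ./test/json_config/config_filter.py -method sort \
        < ./spdk_tgt_config.json > /tmp/spdk_tgt_config.json.axj
    diff -u /tmp/62.YEO /tmp/spdk_tgt_config.json.axj \
        && echo 'INFO: JSON config files are the same'
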
00:04:38.000 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:38.000 +++ basename /dev/fd/62 00:04:38.000 ++ mktemp /tmp/62.XXX 00:04:38.000 + tmp_file_1=/tmp/62.gKI 00:04:38.000 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.000 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:38.000 + tmp_file_2=/tmp/spdk_tgt_config.json.rkg 00:04:38.000 + ret=0 00:04:38.000 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:38.261 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:38.261 + diff -u /tmp/62.gKI /tmp/spdk_tgt_config.json.rkg 00:04:38.261 + ret=1 00:04:38.261 + echo '=== Start of file: /tmp/62.gKI ===' 00:04:38.261 + cat /tmp/62.gKI 00:04:38.261 + echo '=== End of file: /tmp/62.gKI ===' 00:04:38.261 + echo '' 00:04:38.261 + echo '=== Start of file: /tmp/spdk_tgt_config.json.rkg ===' 00:04:38.261 + cat /tmp/spdk_tgt_config.json.rkg 00:04:38.261 + echo '=== End of file: /tmp/spdk_tgt_config.json.rkg ===' 00:04:38.261 + echo '' 00:04:38.261 + rm /tmp/62.gKI /tmp/spdk_tgt_config.json.rkg 00:04:38.261 + exit 1 00:04:38.261 13:09:00 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:38.261 INFO: configuration change detected. 00:04:38.261 13:09:00 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:38.261 13:09:00 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:38.261 13:09:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.261 13:09:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.261 13:09:00 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:38.261 13:09:00 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:38.261 13:09:00 json_config -- json_config/json_config.sh@324 -- # [[ -n 678500 ]] 00:04:38.261 13:09:00 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:38.261 13:09:00 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:38.261 13:09:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:38.261 13:09:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.261 13:09:00 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:38.261 13:09:00 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:38.261 13:09:00 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:38.261 13:09:00 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:38.261 13:09:00 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:38.261 13:09:00 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:38.261 13:09:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.261 13:09:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.261 13:09:00 json_config -- json_config/json_config.sh@330 -- # killprocess 678500 00:04:38.261 13:09:00 json_config -- common/autotest_common.sh@954 -- # '[' -z 678500 ']' 00:04:38.261 13:09:00 json_config -- common/autotest_common.sh@958 -- # kill -0 678500 00:04:38.261 13:09:00 json_config -- common/autotest_common.sh@959 -- # uname 00:04:38.261 13:09:00 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:38.261 13:09:00 
json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 678500 00:04:38.521 13:09:00 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:38.521 13:09:00 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:38.521 13:09:00 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 678500' 00:04:38.521 killing process with pid 678500 00:04:38.521 13:09:00 json_config -- common/autotest_common.sh@973 -- # kill 678500 00:04:38.521 13:09:00 json_config -- common/autotest_common.sh@978 -- # wait 678500 00:04:38.782 13:09:01 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:38.782 13:09:01 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:38.782 13:09:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:38.782 13:09:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.782 13:09:01 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:38.782 13:09:01 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:38.782 INFO: Success 00:04:38.782 00:04:38.782 real 0m7.339s 00:04:38.782 user 0m8.812s 00:04:38.782 sys 0m1.966s 00:04:38.782 13:09:01 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.782 13:09:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:38.782 ************************************ 00:04:38.782 END TEST json_config 00:04:38.782 ************************************ 00:04:38.782 13:09:01 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:38.782 13:09:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.782 13:09:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.782 13:09:01 -- common/autotest_common.sh@10 -- # set +x 00:04:38.782 ************************************ 00:04:38.782 START TEST json_config_extra_key 00:04:38.782 ************************************ 00:04:38.782 13:09:01 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:39.044 13:09:01 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.044 13:09:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.044 13:09:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.044 13:09:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.044 13:09:01 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:39.044 13:09:01 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.044 13:09:01 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.044 --rc genhtml_branch_coverage=1 00:04:39.044 --rc genhtml_function_coverage=1 00:04:39.044 --rc genhtml_legend=1 00:04:39.044 --rc geninfo_all_blocks=1 00:04:39.044 --rc geninfo_unexecuted_blocks=1 00:04:39.044 00:04:39.044 ' 00:04:39.044 13:09:01 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.044 --rc genhtml_branch_coverage=1 00:04:39.044 --rc genhtml_function_coverage=1 00:04:39.044 --rc genhtml_legend=1 00:04:39.044 --rc geninfo_all_blocks=1 00:04:39.044 --rc geninfo_unexecuted_blocks=1 00:04:39.044 00:04:39.044 ' 00:04:39.044 13:09:01 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.044 --rc genhtml_branch_coverage=1 00:04:39.044 --rc genhtml_function_coverage=1 00:04:39.044 --rc genhtml_legend=1 00:04:39.044 --rc geninfo_all_blocks=1 00:04:39.044 --rc geninfo_unexecuted_blocks=1 00:04:39.044 00:04:39.044 ' 00:04:39.044 13:09:01 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.044 --rc genhtml_branch_coverage=1 00:04:39.044 --rc genhtml_function_coverage=1 00:04:39.044 --rc genhtml_legend=1 00:04:39.044 --rc geninfo_all_blocks=1 00:04:39.044 --rc geninfo_unexecuted_blocks=1 00:04:39.044 00:04:39.044 ' 00:04:39.044 13:09:01 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.044 13:09:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.044 13:09:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.044 13:09:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.044 13:09:01 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.044 13:09:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:39.044 13:09:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.044 13:09:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.045 13:09:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:39.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:39.045 13:09:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:39.045 13:09:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:39.045 13:09:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:39.045 13:09:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:39.045 13:09:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:39.045 13:09:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:39.045 13:09:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:39.045 13:09:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:39.045 13:09:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:39.045 13:09:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:39.045 13:09:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:39.045 13:09:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:39.045 13:09:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:39.045 13:09:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:39.045 INFO: launching applications... 
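
The "[: : integer expression expected" message above is benign noise rather than a test failure: a numeric test at nvmf/common.sh line 33 receives an empty string ('[' '' -eq 1 ']' in the trace). A reduction of the failure mode and a guarded form; the variable name is a placeholder, since the trace only shows the already-expanded test:

    flag=""                  # stands in for the unset variable at common.sh line 33
    [ "$flag" -eq 1 ]        # -> [: : integer expression expected
    [ "${flag:-0}" -eq 1 ]   # defaulted form: evaluates cleanly to false
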
00:04:39.045 13:09:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:39.045 13:09:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:39.045 13:09:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:39.045 13:09:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:39.045 13:09:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:39.045 13:09:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:39.045 13:09:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.045 13:09:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.045 13:09:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=679212 00:04:39.045 13:09:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:39.045 Waiting for target to run... 00:04:39.045 13:09:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 679212 /var/tmp/spdk_tgt.sock 00:04:39.045 13:09:01 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 679212 ']' 00:04:39.045 13:09:01 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:39.045 13:09:01 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:39.045 13:09:01 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.045 13:09:01 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:39.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:39.045 13:09:01 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.045 13:09:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:39.045 [2024-12-05 13:09:01.556486] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:04:39.045 [2024-12-05 13:09:01.556561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid679212 ] 00:04:39.306 [2024-12-05 13:09:01.839831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.306 [2024-12-05 13:09:01.870913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.878 13:09:02 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.878 13:09:02 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:39.878 13:09:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:39.878 00:04:39.878 13:09:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:39.878 INFO: shutting down applications... 
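
json_config_extra_key reuses json_config/common.sh, which keys everything about an app off one name ("target" here) in bash associative arrays: pid, RPC socket, core/memory parameters, and the JSON to load. A condensed sketch of that bookkeeping and how start-up consumes it:

    declare -A app_pid=([target]='')
    declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
    declare -A app_params=([target]='-m 0x1 -s 1024')
    declare -A configs_path=([target]='./test/json_config/extra_key.json')

    app=target
    ./build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" \
        --json "${configs_path[$app]}" &
    app_pid[$app]=$!
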
00:04:39.878 13:09:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:39.878 13:09:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:39.878 13:09:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:39.878 13:09:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 679212 ]] 00:04:39.878 13:09:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 679212 00:04:39.878 13:09:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:39.878 13:09:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.878 13:09:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 679212 00:04:39.878 13:09:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:40.448 13:09:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:40.448 13:09:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.448 13:09:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 679212 00:04:40.448 13:09:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:40.449 13:09:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:40.449 13:09:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:40.449 13:09:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:40.449 SPDK target shutdown done 00:04:40.449 13:09:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:40.449 Success 00:04:40.449 00:04:40.449 real 0m1.558s 00:04:40.449 user 0m1.205s 00:04:40.449 sys 0m0.403s 00:04:40.449 13:09:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.449 13:09:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:40.449 ************************************ 00:04:40.449 END TEST json_config_extra_key 00:04:40.449 ************************************ 00:04:40.449 13:09:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.449 13:09:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.449 13:09:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.449 13:09:02 -- common/autotest_common.sh@10 -- # set +x 00:04:40.449 ************************************ 00:04:40.449 START TEST alias_rpc 00:04:40.449 ************************************ 00:04:40.449 13:09:02 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.449 * Looking for test storage... 
00:04:40.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:40.449 13:09:03 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.449 13:09:03 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.449 13:09:03 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:40.710 13:09:03 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.710 13:09:03 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.710 13:09:03 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.710 13:09:03 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.710 --rc genhtml_branch_coverage=1 00:04:40.710 --rc genhtml_function_coverage=1 00:04:40.710 --rc genhtml_legend=1 00:04:40.710 --rc geninfo_all_blocks=1 00:04:40.710 --rc geninfo_unexecuted_blocks=1 00:04:40.710 00:04:40.710 ' 00:04:40.710 13:09:03 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.710 --rc genhtml_branch_coverage=1 00:04:40.710 --rc genhtml_function_coverage=1 00:04:40.710 --rc genhtml_legend=1 00:04:40.710 --rc geninfo_all_blocks=1 00:04:40.710 --rc geninfo_unexecuted_blocks=1 00:04:40.710 00:04:40.710 ' 00:04:40.710 13:09:03 
alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.710 --rc genhtml_branch_coverage=1 00:04:40.710 --rc genhtml_function_coverage=1 00:04:40.710 --rc genhtml_legend=1 00:04:40.710 --rc geninfo_all_blocks=1 00:04:40.710 --rc geninfo_unexecuted_blocks=1 00:04:40.710 00:04:40.710 ' 00:04:40.710 13:09:03 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.710 --rc genhtml_branch_coverage=1 00:04:40.710 --rc genhtml_function_coverage=1 00:04:40.710 --rc genhtml_legend=1 00:04:40.710 --rc geninfo_all_blocks=1 00:04:40.710 --rc geninfo_unexecuted_blocks=1 00:04:40.710 00:04:40.710 ' 00:04:40.710 13:09:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:40.710 13:09:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=679622 00:04:40.710 13:09:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 679622 00:04:40.710 13:09:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:40.710 13:09:03 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 679622 ']' 00:04:40.710 13:09:03 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.710 13:09:03 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.710 13:09:03 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.710 13:09:03 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.710 13:09:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.710 [2024-12-05 13:09:03.166845] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:04:40.710 [2024-12-05 13:09:03.166921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid679622 ] 00:04:40.710 [2024-12-05 13:09:03.253071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.972 [2024-12-05 13:09:03.293818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.543 13:09:03 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:41.543 13:09:03 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:41.543 13:09:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:41.803 13:09:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 679622 00:04:41.803 13:09:04 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 679622 ']' 00:04:41.803 13:09:04 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 679622 00:04:41.803 13:09:04 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:41.803 13:09:04 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.803 13:09:04 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 679622 00:04:41.803 13:09:04 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.803 13:09:04 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.803 13:09:04 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 679622' 00:04:41.803 killing process with pid 679622 00:04:41.803 13:09:04 alias_rpc -- common/autotest_common.sh@973 -- # kill 679622 00:04:41.803 13:09:04 alias_rpc -- common/autotest_common.sh@978 -- # wait 679622 00:04:42.065 00:04:42.065 real 0m1.535s 00:04:42.065 user 0m1.687s 00:04:42.065 sys 0m0.429s 00:04:42.065 13:09:04 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.065 13:09:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.065 ************************************ 00:04:42.065 END TEST alias_rpc 00:04:42.065 ************************************ 00:04:42.065 13:09:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:42.065 13:09:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:42.065 13:09:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.065 13:09:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.065 13:09:04 -- common/autotest_common.sh@10 -- # set +x 00:04:42.065 ************************************ 00:04:42.065 START TEST spdkcli_tcp 00:04:42.065 ************************************ 00:04:42.065 13:09:04 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:42.065 * Looking for test storage... 
00:04:42.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:42.065 13:09:04 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.065 13:09:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.065 13:09:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.327 13:09:04 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.327 13:09:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:42.327 13:09:04 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.327 13:09:04 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.327 --rc genhtml_branch_coverage=1 00:04:42.327 --rc genhtml_function_coverage=1 00:04:42.327 --rc genhtml_legend=1 00:04:42.327 --rc geninfo_all_blocks=1 00:04:42.327 --rc geninfo_unexecuted_blocks=1 00:04:42.327 00:04:42.327 ' 00:04:42.327 13:09:04 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.327 --rc genhtml_branch_coverage=1 00:04:42.327 --rc genhtml_function_coverage=1 00:04:42.327 --rc genhtml_legend=1 00:04:42.327 --rc geninfo_all_blocks=1 00:04:42.327 --rc 
geninfo_unexecuted_blocks=1 00:04:42.327 00:04:42.327 ' 00:04:42.327 13:09:04 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:42.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.327 --rc genhtml_branch_coverage=1 00:04:42.327 --rc genhtml_function_coverage=1 00:04:42.327 --rc genhtml_legend=1 00:04:42.327 --rc geninfo_all_blocks=1 00:04:42.327 --rc geninfo_unexecuted_blocks=1 00:04:42.327 00:04:42.327 ' 00:04:42.327 13:09:04 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.327 --rc genhtml_branch_coverage=1 00:04:42.327 --rc genhtml_function_coverage=1 00:04:42.327 --rc genhtml_legend=1 00:04:42.327 --rc geninfo_all_blocks=1 00:04:42.327 --rc geninfo_unexecuted_blocks=1 00:04:42.327 00:04:42.327 ' 00:04:42.327 13:09:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:42.327 13:09:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:42.327 13:09:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:42.327 13:09:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:42.327 13:09:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:42.327 13:09:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:42.327 13:09:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:42.327 13:09:04 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.327 13:09:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.327 13:09:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=680025 00:04:42.327 13:09:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 680025 00:04:42.327 13:09:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:42.327 13:09:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 680025 ']' 00:04:42.327 13:09:04 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.327 13:09:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.327 13:09:04 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.327 13:09:04 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.327 13:09:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.327 [2024-12-05 13:09:04.792320] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:04:42.327 [2024-12-05 13:09:04.792392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid680025 ] 00:04:42.327 [2024-12-05 13:09:04.878932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.589 [2024-12-05 13:09:04.921553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.589 [2024-12-05 13:09:04.921555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.160 13:09:05 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.160 13:09:05 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:43.160 13:09:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=680163 00:04:43.160 13:09:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:43.160 13:09:05 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:43.420 [ 00:04:43.420 "bdev_malloc_delete", 00:04:43.421 "bdev_malloc_create", 00:04:43.421 "bdev_null_resize", 00:04:43.421 "bdev_null_delete", 00:04:43.421 "bdev_null_create", 00:04:43.421 "bdev_nvme_cuse_unregister", 00:04:43.421 "bdev_nvme_cuse_register", 00:04:43.421 "bdev_opal_new_user", 00:04:43.421 "bdev_opal_set_lock_state", 00:04:43.421 "bdev_opal_delete", 00:04:43.421 "bdev_opal_get_info", 00:04:43.421 "bdev_opal_create", 00:04:43.421 "bdev_nvme_opal_revert", 00:04:43.421 "bdev_nvme_opal_init", 00:04:43.421 "bdev_nvme_send_cmd", 00:04:43.421 "bdev_nvme_set_keys", 00:04:43.421 "bdev_nvme_get_path_iostat", 00:04:43.421 "bdev_nvme_get_mdns_discovery_info", 00:04:43.421 "bdev_nvme_stop_mdns_discovery", 00:04:43.421 "bdev_nvme_start_mdns_discovery", 00:04:43.421 "bdev_nvme_set_multipath_policy", 00:04:43.421 "bdev_nvme_set_preferred_path", 00:04:43.421 "bdev_nvme_get_io_paths", 00:04:43.421 "bdev_nvme_remove_error_injection", 00:04:43.421 "bdev_nvme_add_error_injection", 00:04:43.421 "bdev_nvme_get_discovery_info", 00:04:43.421 "bdev_nvme_stop_discovery", 00:04:43.421 "bdev_nvme_start_discovery", 00:04:43.421 "bdev_nvme_get_controller_health_info", 00:04:43.421 "bdev_nvme_disable_controller", 00:04:43.421 "bdev_nvme_enable_controller", 00:04:43.421 "bdev_nvme_reset_controller", 00:04:43.421 "bdev_nvme_get_transport_statistics", 00:04:43.421 "bdev_nvme_apply_firmware", 00:04:43.421 "bdev_nvme_detach_controller", 00:04:43.421 "bdev_nvme_get_controllers", 00:04:43.421 "bdev_nvme_attach_controller", 00:04:43.421 "bdev_nvme_set_hotplug", 00:04:43.421 "bdev_nvme_set_options", 00:04:43.421 "bdev_passthru_delete", 00:04:43.421 "bdev_passthru_create", 00:04:43.421 "bdev_lvol_set_parent_bdev", 00:04:43.421 "bdev_lvol_set_parent", 00:04:43.421 "bdev_lvol_check_shallow_copy", 00:04:43.421 "bdev_lvol_start_shallow_copy", 00:04:43.421 "bdev_lvol_grow_lvstore", 00:04:43.421 "bdev_lvol_get_lvols", 00:04:43.421 "bdev_lvol_get_lvstores", 00:04:43.421 "bdev_lvol_delete", 00:04:43.421 "bdev_lvol_set_read_only", 00:04:43.421 "bdev_lvol_resize", 00:04:43.421 "bdev_lvol_decouple_parent", 00:04:43.421 "bdev_lvol_inflate", 00:04:43.421 "bdev_lvol_rename", 00:04:43.421 "bdev_lvol_clone_bdev", 00:04:43.421 "bdev_lvol_clone", 00:04:43.421 "bdev_lvol_snapshot", 00:04:43.421 "bdev_lvol_create", 00:04:43.421 "bdev_lvol_delete_lvstore", 00:04:43.421 "bdev_lvol_rename_lvstore", 
00:04:43.421 "bdev_lvol_create_lvstore", 00:04:43.421 "bdev_raid_set_options", 00:04:43.421 "bdev_raid_remove_base_bdev", 00:04:43.421 "bdev_raid_add_base_bdev", 00:04:43.421 "bdev_raid_delete", 00:04:43.421 "bdev_raid_create", 00:04:43.421 "bdev_raid_get_bdevs", 00:04:43.421 "bdev_error_inject_error", 00:04:43.421 "bdev_error_delete", 00:04:43.421 "bdev_error_create", 00:04:43.421 "bdev_split_delete", 00:04:43.421 "bdev_split_create", 00:04:43.421 "bdev_delay_delete", 00:04:43.421 "bdev_delay_create", 00:04:43.421 "bdev_delay_update_latency", 00:04:43.421 "bdev_zone_block_delete", 00:04:43.421 "bdev_zone_block_create", 00:04:43.421 "blobfs_create", 00:04:43.421 "blobfs_detect", 00:04:43.421 "blobfs_set_cache_size", 00:04:43.421 "bdev_aio_delete", 00:04:43.421 "bdev_aio_rescan", 00:04:43.421 "bdev_aio_create", 00:04:43.421 "bdev_ftl_set_property", 00:04:43.421 "bdev_ftl_get_properties", 00:04:43.421 "bdev_ftl_get_stats", 00:04:43.421 "bdev_ftl_unmap", 00:04:43.421 "bdev_ftl_unload", 00:04:43.421 "bdev_ftl_delete", 00:04:43.421 "bdev_ftl_load", 00:04:43.421 "bdev_ftl_create", 00:04:43.421 "bdev_virtio_attach_controller", 00:04:43.421 "bdev_virtio_scsi_get_devices", 00:04:43.421 "bdev_virtio_detach_controller", 00:04:43.421 "bdev_virtio_blk_set_hotplug", 00:04:43.421 "bdev_iscsi_delete", 00:04:43.421 "bdev_iscsi_create", 00:04:43.421 "bdev_iscsi_set_options", 00:04:43.421 "accel_error_inject_error", 00:04:43.421 "ioat_scan_accel_module", 00:04:43.421 "dsa_scan_accel_module", 00:04:43.421 "iaa_scan_accel_module", 00:04:43.421 "vfu_virtio_create_fs_endpoint", 00:04:43.421 "vfu_virtio_create_scsi_endpoint", 00:04:43.421 "vfu_virtio_scsi_remove_target", 00:04:43.421 "vfu_virtio_scsi_add_target", 00:04:43.421 "vfu_virtio_create_blk_endpoint", 00:04:43.421 "vfu_virtio_delete_endpoint", 00:04:43.421 "keyring_file_remove_key", 00:04:43.421 "keyring_file_add_key", 00:04:43.421 "keyring_linux_set_options", 00:04:43.421 "fsdev_aio_delete", 00:04:43.421 "fsdev_aio_create", 00:04:43.421 "iscsi_get_histogram", 00:04:43.421 "iscsi_enable_histogram", 00:04:43.421 "iscsi_set_options", 00:04:43.421 "iscsi_get_auth_groups", 00:04:43.421 "iscsi_auth_group_remove_secret", 00:04:43.421 "iscsi_auth_group_add_secret", 00:04:43.421 "iscsi_delete_auth_group", 00:04:43.421 "iscsi_create_auth_group", 00:04:43.421 "iscsi_set_discovery_auth", 00:04:43.421 "iscsi_get_options", 00:04:43.421 "iscsi_target_node_request_logout", 00:04:43.421 "iscsi_target_node_set_redirect", 00:04:43.421 "iscsi_target_node_set_auth", 00:04:43.421 "iscsi_target_node_add_lun", 00:04:43.421 "iscsi_get_stats", 00:04:43.421 "iscsi_get_connections", 00:04:43.421 "iscsi_portal_group_set_auth", 00:04:43.421 "iscsi_start_portal_group", 00:04:43.421 "iscsi_delete_portal_group", 00:04:43.421 "iscsi_create_portal_group", 00:04:43.421 "iscsi_get_portal_groups", 00:04:43.421 "iscsi_delete_target_node", 00:04:43.421 "iscsi_target_node_remove_pg_ig_maps", 00:04:43.421 "iscsi_target_node_add_pg_ig_maps", 00:04:43.421 "iscsi_create_target_node", 00:04:43.421 "iscsi_get_target_nodes", 00:04:43.421 "iscsi_delete_initiator_group", 00:04:43.421 "iscsi_initiator_group_remove_initiators", 00:04:43.421 "iscsi_initiator_group_add_initiators", 00:04:43.421 "iscsi_create_initiator_group", 00:04:43.421 "iscsi_get_initiator_groups", 00:04:43.421 "nvmf_set_crdt", 00:04:43.421 "nvmf_set_config", 00:04:43.421 "nvmf_set_max_subsystems", 00:04:43.421 "nvmf_stop_mdns_prr", 00:04:43.421 "nvmf_publish_mdns_prr", 00:04:43.421 "nvmf_subsystem_get_listeners", 00:04:43.421 
"nvmf_subsystem_get_qpairs", 00:04:43.421 "nvmf_subsystem_get_controllers", 00:04:43.421 "nvmf_get_stats", 00:04:43.421 "nvmf_get_transports", 00:04:43.421 "nvmf_create_transport", 00:04:43.421 "nvmf_get_targets", 00:04:43.421 "nvmf_delete_target", 00:04:43.421 "nvmf_create_target", 00:04:43.421 "nvmf_subsystem_allow_any_host", 00:04:43.421 "nvmf_subsystem_set_keys", 00:04:43.421 "nvmf_subsystem_remove_host", 00:04:43.421 "nvmf_subsystem_add_host", 00:04:43.421 "nvmf_ns_remove_host", 00:04:43.421 "nvmf_ns_add_host", 00:04:43.421 "nvmf_subsystem_remove_ns", 00:04:43.421 "nvmf_subsystem_set_ns_ana_group", 00:04:43.421 "nvmf_subsystem_add_ns", 00:04:43.421 "nvmf_subsystem_listener_set_ana_state", 00:04:43.421 "nvmf_discovery_get_referrals", 00:04:43.421 "nvmf_discovery_remove_referral", 00:04:43.421 "nvmf_discovery_add_referral", 00:04:43.421 "nvmf_subsystem_remove_listener", 00:04:43.421 "nvmf_subsystem_add_listener", 00:04:43.421 "nvmf_delete_subsystem", 00:04:43.421 "nvmf_create_subsystem", 00:04:43.421 "nvmf_get_subsystems", 00:04:43.421 "env_dpdk_get_mem_stats", 00:04:43.421 "nbd_get_disks", 00:04:43.421 "nbd_stop_disk", 00:04:43.421 "nbd_start_disk", 00:04:43.421 "ublk_recover_disk", 00:04:43.421 "ublk_get_disks", 00:04:43.421 "ublk_stop_disk", 00:04:43.421 "ublk_start_disk", 00:04:43.421 "ublk_destroy_target", 00:04:43.421 "ublk_create_target", 00:04:43.421 "virtio_blk_create_transport", 00:04:43.421 "virtio_blk_get_transports", 00:04:43.421 "vhost_controller_set_coalescing", 00:04:43.421 "vhost_get_controllers", 00:04:43.421 "vhost_delete_controller", 00:04:43.421 "vhost_create_blk_controller", 00:04:43.421 "vhost_scsi_controller_remove_target", 00:04:43.421 "vhost_scsi_controller_add_target", 00:04:43.421 "vhost_start_scsi_controller", 00:04:43.421 "vhost_create_scsi_controller", 00:04:43.421 "thread_set_cpumask", 00:04:43.421 "scheduler_set_options", 00:04:43.421 "framework_get_governor", 00:04:43.421 "framework_get_scheduler", 00:04:43.421 "framework_set_scheduler", 00:04:43.421 "framework_get_reactors", 00:04:43.421 "thread_get_io_channels", 00:04:43.421 "thread_get_pollers", 00:04:43.421 "thread_get_stats", 00:04:43.421 "framework_monitor_context_switch", 00:04:43.421 "spdk_kill_instance", 00:04:43.421 "log_enable_timestamps", 00:04:43.421 "log_get_flags", 00:04:43.421 "log_clear_flag", 00:04:43.421 "log_set_flag", 00:04:43.421 "log_get_level", 00:04:43.421 "log_set_level", 00:04:43.421 "log_get_print_level", 00:04:43.421 "log_set_print_level", 00:04:43.421 "framework_enable_cpumask_locks", 00:04:43.421 "framework_disable_cpumask_locks", 00:04:43.421 "framework_wait_init", 00:04:43.421 "framework_start_init", 00:04:43.421 "scsi_get_devices", 00:04:43.421 "bdev_get_histogram", 00:04:43.421 "bdev_enable_histogram", 00:04:43.421 "bdev_set_qos_limit", 00:04:43.421 "bdev_set_qd_sampling_period", 00:04:43.421 "bdev_get_bdevs", 00:04:43.421 "bdev_reset_iostat", 00:04:43.421 "bdev_get_iostat", 00:04:43.421 "bdev_examine", 00:04:43.421 "bdev_wait_for_examine", 00:04:43.421 "bdev_set_options", 00:04:43.421 "accel_get_stats", 00:04:43.421 "accel_set_options", 00:04:43.421 "accel_set_driver", 00:04:43.421 "accel_crypto_key_destroy", 00:04:43.421 "accel_crypto_keys_get", 00:04:43.421 "accel_crypto_key_create", 00:04:43.421 "accel_assign_opc", 00:04:43.421 "accel_get_module_info", 00:04:43.421 "accel_get_opc_assignments", 00:04:43.421 "vmd_rescan", 00:04:43.422 "vmd_remove_device", 00:04:43.422 "vmd_enable", 00:04:43.422 "sock_get_default_impl", 00:04:43.422 "sock_set_default_impl", 
00:04:43.422 "sock_impl_set_options", 00:04:43.422 "sock_impl_get_options", 00:04:43.422 "iobuf_get_stats", 00:04:43.422 "iobuf_set_options", 00:04:43.422 "keyring_get_keys", 00:04:43.422 "vfu_tgt_set_base_path", 00:04:43.422 "framework_get_pci_devices", 00:04:43.422 "framework_get_config", 00:04:43.422 "framework_get_subsystems", 00:04:43.422 "fsdev_set_opts", 00:04:43.422 "fsdev_get_opts", 00:04:43.422 "trace_get_info", 00:04:43.422 "trace_get_tpoint_group_mask", 00:04:43.422 "trace_disable_tpoint_group", 00:04:43.422 "trace_enable_tpoint_group", 00:04:43.422 "trace_clear_tpoint_mask", 00:04:43.422 "trace_set_tpoint_mask", 00:04:43.422 "notify_get_notifications", 00:04:43.422 "notify_get_types", 00:04:43.422 "spdk_get_version", 00:04:43.422 "rpc_get_methods" 00:04:43.422 ] 00:04:43.422 13:09:05 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:43.422 13:09:05 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:43.422 13:09:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.422 13:09:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:43.422 13:09:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 680025 00:04:43.422 13:09:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 680025 ']' 00:04:43.422 13:09:05 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 680025 00:04:43.422 13:09:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:43.422 13:09:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.422 13:09:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 680025 00:04:43.422 13:09:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:43.422 13:09:05 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:43.422 13:09:05 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 680025' 00:04:43.422 killing process with pid 680025 00:04:43.422 13:09:05 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 680025 00:04:43.422 13:09:05 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 680025 00:04:43.682 00:04:43.682 real 0m1.554s 00:04:43.682 user 0m2.813s 00:04:43.682 sys 0m0.472s 00:04:43.682 13:09:06 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.682 13:09:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.682 ************************************ 00:04:43.682 END TEST spdkcli_tcp 00:04:43.682 ************************************ 00:04:43.682 13:09:06 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:43.682 13:09:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.682 13:09:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.683 13:09:06 -- common/autotest_common.sh@10 -- # set +x 00:04:43.683 ************************************ 00:04:43.683 START TEST dpdk_mem_utility 00:04:43.683 ************************************ 00:04:43.683 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:43.683 * Looking for test storage... 
00:04:43.682 13:09:06 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:43.682 13:09:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:43.683 13:09:06 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:43.683 13:09:06 -- common/autotest_common.sh@10 -- # set +x
00:04:43.683 ************************************
00:04:43.683 START TEST dpdk_mem_utility
00:04:43.683 ************************************
00:04:43.683 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:04:43.683 * Looking for test storage...
00:04:43.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:04:43.683 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:43.683 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:43.683 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version
00:04:43.943 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:43.943 13:09:06 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:43.943 13:09:06 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:43.943 13:09:06 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:43.943 13:09:06 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:04:43.943 13:09:06 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:04:43.943 13:09:06 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:04:43.943 13:09:06 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:43.944 13:09:06 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:04:43.944 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:43.944 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:43.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:43.944 --rc genhtml_branch_coverage=1
00:04:43.944 --rc genhtml_function_coverage=1
00:04:43.944 --rc genhtml_legend=1
00:04:43.944 --rc geninfo_all_blocks=1
00:04:43.944 --rc geninfo_unexecuted_blocks=1
00:04:43.944
00:04:43.944 '
00:04:43.944 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:43.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:43.944 --rc
genhtml_branch_coverage=1 00:04:43.944 --rc genhtml_function_coverage=1 00:04:43.944 --rc genhtml_legend=1 00:04:43.944 --rc geninfo_all_blocks=1 00:04:43.944 --rc geninfo_unexecuted_blocks=1 00:04:43.944 00:04:43.944 ' 00:04:43.944 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:43.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.944 --rc genhtml_branch_coverage=1 00:04:43.944 --rc genhtml_function_coverage=1 00:04:43.944 --rc genhtml_legend=1 00:04:43.944 --rc geninfo_all_blocks=1 00:04:43.944 --rc geninfo_unexecuted_blocks=1 00:04:43.944 00:04:43.944 ' 00:04:43.944 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:43.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.944 --rc genhtml_branch_coverage=1 00:04:43.944 --rc genhtml_function_coverage=1 00:04:43.944 --rc genhtml_legend=1 00:04:43.944 --rc geninfo_all_blocks=1 00:04:43.944 --rc geninfo_unexecuted_blocks=1 00:04:43.944 00:04:43.944 ' 00:04:43.944 13:09:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:43.944 13:09:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=680433 00:04:43.944 13:09:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 680433 00:04:43.944 13:09:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.944 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 680433 ']' 00:04:43.944 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.944 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.944 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.944 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.944 13:09:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.944 [2024-12-05 13:09:06.400624] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:04:43.944 [2024-12-05 13:09:06.400679] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid680433 ] 00:04:43.944 [2024-12-05 13:09:06.481229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.205 [2024-12-05 13:09:06.520372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.777 13:09:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.777 13:09:07 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:44.777 13:09:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:44.777 13:09:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:44.777 13:09:07 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:44.777 13:09:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:44.777 { 00:04:44.777 "filename": "/tmp/spdk_mem_dump.txt" 00:04:44.777 } 00:04:44.777 13:09:07 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:44.777 13:09:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:44.777 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:44.777 1 heaps totaling size 818.000000 MiB 00:04:44.777 size: 818.000000 MiB heap id: 0 00:04:44.777 end heaps---------- 00:04:44.777 9 mempools totaling size 603.782043 MiB 00:04:44.777 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:44.777 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:44.777 size: 100.555481 MiB name: bdev_io_680433 00:04:44.777 size: 50.003479 MiB name: msgpool_680433 00:04:44.777 size: 36.509338 MiB name: fsdev_io_680433 00:04:44.777 size: 21.763794 MiB name: PDU_Pool 00:04:44.777 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:44.777 size: 4.133484 MiB name: evtpool_680433 00:04:44.777 size: 0.026123 MiB name: Session_Pool 00:04:44.777 end mempools------- 00:04:44.778 6 memzones totaling size 4.142822 MiB 00:04:44.778 size: 1.000366 MiB name: RG_ring_0_680433 00:04:44.778 size: 1.000366 MiB name: RG_ring_1_680433 00:04:44.778 size: 1.000366 MiB name: RG_ring_4_680433 00:04:44.778 size: 1.000366 MiB name: RG_ring_5_680433 00:04:44.778 size: 0.125366 MiB name: RG_ring_2_680433 00:04:44.778 size: 0.015991 MiB name: RG_ring_3_680433 00:04:44.778 end memzones------- 00:04:44.778 13:09:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:44.778 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:44.778 list of free elements. 
size: 10.852478 MiB 00:04:44.778 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:44.778 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:44.778 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:44.778 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:44.778 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:44.778 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:44.778 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:44.778 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:44.778 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:44.778 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:44.778 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:44.778 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:44.778 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:44.778 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:44.778 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:44.778 list of standard malloc elements. size: 199.218628 MiB 00:04:44.778 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:44.778 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:44.778 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:44.778 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:44.778 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:44.778 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:44.778 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:44.778 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:44.778 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:44.778 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:44.778 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:44.778 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:44.778 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:44.778 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:44.778 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:44.778 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:44.778 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:44.778 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:44.778 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:44.778 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:44.778 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:44.778 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:44.778 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:44.778 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:44.778 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:44.778 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:44.778 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:44.778 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:44.778 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:44.778 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:44.778 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:44.778 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:44.778 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:44.778 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:44.778 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:44.778 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:44.778 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:44.778 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:44.778 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:44.778 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:44.778 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:44.778 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:44.778 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:44.778 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:44.778 list of memzone associated elements. size: 607.928894 MiB 00:04:44.778 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:44.778 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:44.778 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:44.778 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:44.778 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:44.778 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_680433_0 00:04:44.778 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:44.778 associated memzone info: size: 48.002930 MiB name: MP_msgpool_680433_0 00:04:44.778 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:44.778 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_680433_0 00:04:44.778 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:44.778 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:44.778 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:44.778 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:44.778 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:44.778 associated memzone info: size: 3.000122 MiB name: MP_evtpool_680433_0 00:04:44.778 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:44.778 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_680433 00:04:44.778 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:44.778 associated memzone info: size: 1.007996 MiB name: MP_evtpool_680433 00:04:44.778 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:44.778 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:44.778 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:44.778 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:44.778 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:44.778 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:44.778 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:44.778 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:44.778 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:44.778 associated memzone info: size: 1.000366 MiB name: RG_ring_0_680433 00:04:44.778 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:44.778 associated memzone info: size: 1.000366 MiB name: RG_ring_1_680433 00:04:44.778 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:44.778 associated memzone info: size: 1.000366 MiB name: RG_ring_4_680433 00:04:44.778 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:44.778 associated memzone info: size: 1.000366 MiB name: RG_ring_5_680433 00:04:44.778 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:44.779 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_680433 00:04:44.779 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:44.779 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_680433 00:04:44.779 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:44.779 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:44.779 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:44.779 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:44.779 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:44.779 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:44.779 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:44.779 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_680433 00:04:44.779 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:44.779 associated memzone info: size: 0.125366 MiB name: RG_ring_2_680433 00:04:44.779 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:44.779 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:44.779 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:44.779 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:44.779 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:44.779 associated memzone info: size: 0.015991 MiB name: RG_ring_3_680433 00:04:44.779 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:44.779 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:44.779 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:44.779 associated memzone info: size: 0.000183 MiB name: MP_msgpool_680433 00:04:44.779 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:44.779 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_680433 00:04:44.779 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:44.779 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_680433 00:04:44.779 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:44.779 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:44.779 13:09:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:44.779 13:09:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 680433 00:04:44.779 13:09:07 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 680433 ']' 00:04:44.779 13:09:07 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 680433 00:04:44.779 13:09:07 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:44.779 13:09:07 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.779 13:09:07 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 680433 00:04:45.041 13:09:07 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.041 13:09:07 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.041 13:09:07 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 680433' 00:04:45.041 killing process with pid 680433 00:04:45.041 13:09:07 dpdk_mem_utility -- 
common/autotest_common.sh@973 -- # kill 680433 00:04:45.041 13:09:07 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 680433 00:04:45.041 00:04:45.041 real 0m1.441s 00:04:45.041 user 0m1.552s 00:04:45.041 sys 0m0.398s 00:04:45.041 13:09:07 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.041 13:09:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:45.041 ************************************ 00:04:45.041 END TEST dpdk_mem_utility 00:04:45.041 ************************************ 00:04:45.302 13:09:07 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:45.302 13:09:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.302 13:09:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.302 13:09:07 -- common/autotest_common.sh@10 -- # set +x 00:04:45.302 ************************************ 00:04:45.302 START TEST event 00:04:45.302 ************************************ 00:04:45.302 13:09:07 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:45.302 * Looking for test storage... 00:04:45.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:45.303 13:09:07 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.303 13:09:07 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.303 13:09:07 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.303 13:09:07 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.303 13:09:07 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.303 13:09:07 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.303 13:09:07 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.303 13:09:07 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.303 13:09:07 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.303 13:09:07 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.303 13:09:07 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.303 13:09:07 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.303 13:09:07 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.303 13:09:07 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.303 13:09:07 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.303 13:09:07 event -- scripts/common.sh@344 -- # case "$op" in 00:04:45.303 13:09:07 event -- scripts/common.sh@345 -- # : 1 00:04:45.303 13:09:07 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.303 13:09:07 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.303 13:09:07 event -- scripts/common.sh@365 -- # decimal 1 00:04:45.303 13:09:07 event -- scripts/common.sh@353 -- # local d=1 00:04:45.303 13:09:07 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.303 13:09:07 event -- scripts/common.sh@355 -- # echo 1 00:04:45.303 13:09:07 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.303 13:09:07 event -- scripts/common.sh@366 -- # decimal 2 00:04:45.303 13:09:07 event -- scripts/common.sh@353 -- # local d=2 00:04:45.303 13:09:07 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.303 13:09:07 event -- scripts/common.sh@355 -- # echo 2 00:04:45.303 13:09:07 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.303 13:09:07 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.303 13:09:07 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.303 13:09:07 event -- scripts/common.sh@368 -- # return 0 00:04:45.303 13:09:07 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.303 13:09:07 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.303 --rc genhtml_branch_coverage=1 00:04:45.303 --rc genhtml_function_coverage=1 00:04:45.303 --rc genhtml_legend=1 00:04:45.303 --rc geninfo_all_blocks=1 00:04:45.303 --rc geninfo_unexecuted_blocks=1 00:04:45.303 00:04:45.303 ' 00:04:45.303 13:09:07 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.303 --rc genhtml_branch_coverage=1 00:04:45.303 --rc genhtml_function_coverage=1 00:04:45.303 --rc genhtml_legend=1 00:04:45.303 --rc geninfo_all_blocks=1 00:04:45.303 --rc geninfo_unexecuted_blocks=1 00:04:45.303 00:04:45.303 ' 00:04:45.303 13:09:07 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.303 --rc genhtml_branch_coverage=1 00:04:45.303 --rc genhtml_function_coverage=1 00:04:45.303 --rc genhtml_legend=1 00:04:45.303 --rc geninfo_all_blocks=1 00:04:45.303 --rc geninfo_unexecuted_blocks=1 00:04:45.303 00:04:45.303 ' 00:04:45.303 13:09:07 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.303 --rc genhtml_branch_coverage=1 00:04:45.303 --rc genhtml_function_coverage=1 00:04:45.303 --rc genhtml_legend=1 00:04:45.303 --rc geninfo_all_blocks=1 00:04:45.303 --rc geninfo_unexecuted_blocks=1 00:04:45.303 00:04:45.303 ' 00:04:45.303 13:09:07 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:45.303 13:09:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:45.303 13:09:07 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:45.303 13:09:07 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:45.303 13:09:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.303 13:09:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.564 ************************************ 00:04:45.564 START TEST event_perf 00:04:45.564 ************************************ 00:04:45.564 13:09:07 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:45.564 Running I/O for 1 seconds...[2024-12-05 13:09:07.911440] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:04:45.564 [2024-12-05 13:09:07.911542] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid680837 ] 00:04:45.564 [2024-12-05 13:09:07.999668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:45.564 [2024-12-05 13:09:08.044490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.564 [2024-12-05 13:09:08.044608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.564 [2024-12-05 13:09:08.044642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:45.564 [2024-12-05 13:09:08.044644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.505 Running I/O for 1 seconds... 00:04:46.505 lcore 0: 179352 00:04:46.505 lcore 1: 179349 00:04:46.505 lcore 2: 179348 00:04:46.505 lcore 3: 179350 00:04:46.765 done. 00:04:46.765 00:04:46.765 real 0m1.190s 00:04:46.765 user 0m4.109s 00:04:46.765 sys 0m0.076s 00:04:46.765 13:09:09 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.765 13:09:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:46.765 ************************************ 00:04:46.765 END TEST event_perf 00:04:46.765 ************************************ 00:04:46.765 13:09:09 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:46.765 13:09:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:46.765 13:09:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.765 13:09:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.765 ************************************ 00:04:46.765 START TEST event_reactor 00:04:46.765 ************************************ 00:04:46.765 13:09:09 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:46.765 [2024-12-05 13:09:09.176478] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
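For reference, the four per-lcore counters reported by event_perf above sum to 179352 + 179349 + 179348 + 179350 = 717399 events over the 1-second run on the 0xF core mask, i.e. roughly 717K events/s aggregate and about 179K events/s per reactor, assuming each counter is the number of events that lcore processed during the measured interval.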
00:04:46.765 [2024-12-05 13:09:09.176575] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid681323 ] 00:04:46.765 [2024-12-05 13:09:09.258298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.765 [2024-12-05 13:09:09.293371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.149 test_start 00:04:48.149 oneshot 00:04:48.149 tick 100 00:04:48.149 tick 100 00:04:48.149 tick 250 00:04:48.149 tick 100 00:04:48.149 tick 100 00:04:48.149 tick 100 00:04:48.149 tick 250 00:04:48.149 tick 500 00:04:48.149 tick 100 00:04:48.149 tick 100 00:04:48.149 tick 250 00:04:48.149 tick 100 00:04:48.150 tick 100 00:04:48.150 test_end 00:04:48.150 00:04:48.150 real 0m1.172s 00:04:48.150 user 0m1.094s 00:04:48.150 sys 0m0.074s 00:04:48.150 13:09:10 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.150 13:09:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:48.150 ************************************ 00:04:48.150 END TEST event_reactor 00:04:48.150 ************************************ 00:04:48.150 13:09:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:48.150 13:09:10 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:48.150 13:09:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.150 13:09:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.150 ************************************ 00:04:48.150 START TEST event_reactor_perf 00:04:48.150 ************************************ 00:04:48.150 13:09:10 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:48.150 [2024-12-05 13:09:10.426529] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:04:48.150 [2024-12-05 13:09:10.426634] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid681669 ] 00:04:48.150 [2024-12-05 13:09:10.508966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.150 [2024-12-05 13:09:10.544660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.107 test_start 00:04:49.107 test_end 00:04:49.107 Performance: 370851 events per second 00:04:49.107 00:04:49.107 real 0m1.171s 00:04:49.107 user 0m1.097s 00:04:49.107 sys 0m0.071s 00:04:49.107 13:09:11 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.107 13:09:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.107 ************************************ 00:04:49.107 END TEST event_reactor_perf 00:04:49.107 ************************************ 00:04:49.107 13:09:11 event -- event/event.sh@49 -- # uname -s 00:04:49.107 13:09:11 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:49.107 13:09:11 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:49.107 13:09:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.107 13:09:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.107 13:09:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.107 ************************************ 00:04:49.107 START TEST event_scheduler 00:04:49.107 ************************************ 00:04:49.107 13:09:11 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:49.368 * Looking for test storage... 
00:04:49.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:49.368 13:09:11 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.368 13:09:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.368 13:09:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.368 13:09:11 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.368 13:09:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:49.368 13:09:11 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.368 13:09:11 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.368 --rc genhtml_branch_coverage=1 00:04:49.368 --rc genhtml_function_coverage=1 00:04:49.368 --rc genhtml_legend=1 00:04:49.368 --rc geninfo_all_blocks=1 00:04:49.368 --rc geninfo_unexecuted_blocks=1 00:04:49.368 00:04:49.368 ' 00:04:49.368 13:09:11 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.368 --rc genhtml_branch_coverage=1 00:04:49.368 --rc genhtml_function_coverage=1 00:04:49.368 --rc genhtml_legend=1 00:04:49.368 --rc geninfo_all_blocks=1 00:04:49.368 --rc geninfo_unexecuted_blocks=1 00:04:49.368 00:04:49.368 ' 00:04:49.368 13:09:11 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.368 --rc genhtml_branch_coverage=1 00:04:49.368 --rc genhtml_function_coverage=1 00:04:49.368 --rc genhtml_legend=1 00:04:49.368 --rc geninfo_all_blocks=1 00:04:49.368 --rc geninfo_unexecuted_blocks=1 00:04:49.368 00:04:49.368 ' 00:04:49.368 13:09:11 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.368 --rc genhtml_branch_coverage=1 00:04:49.368 --rc genhtml_function_coverage=1 00:04:49.368 --rc genhtml_legend=1 00:04:49.368 --rc geninfo_all_blocks=1 00:04:49.368 --rc geninfo_unexecuted_blocks=1 00:04:49.368 00:04:49.368 ' 00:04:49.368 13:09:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:49.368 13:09:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=682068 00:04:49.368 13:09:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.368 13:09:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 682068 00:04:49.368 13:09:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 
00:04:49.368 13:09:11 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 682068 ']' 00:04:49.368 13:09:11 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.368 13:09:11 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.368 13:09:11 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.368 13:09:11 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.368 13:09:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.368 [2024-12-05 13:09:11.907652] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:04:49.368 [2024-12-05 13:09:11.907708] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid682068 ] 00:04:49.629 [2024-12-05 13:09:11.973157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:49.629 [2024-12-05 13:09:12.003776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.629 [2024-12-05 13:09:12.003931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.629 [2024-12-05 13:09:12.004231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.629 [2024-12-05 13:09:12.004232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.629 13:09:12 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.630 13:09:12 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:49.630 13:09:12 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:49.630 13:09:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.630 13:09:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.630 [2024-12-05 13:09:12.048678] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:49.630 [2024-12-05 13:09:12.048691] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:49.630 [2024-12-05 13:09:12.048699] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:49.630 [2024-12-05 13:09:12.048703] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:49.630 [2024-12-05 13:09:12.048707] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:49.630 13:09:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.630 13:09:12 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:49.630 13:09:12 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.630 13:09:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.630 [2024-12-05 13:09:12.105755] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
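The scheduler_create_thread test that follows drives the scheduler app through test-specific RPCs: scheduler_thread_create, scheduler_thread_set_active and scheduler_thread_delete are registered by the scheduler test binary itself (not a stock SPDK target) and reached through rpc_cmd's --plugin scheduler_plugin hook. A minimal sketch of one such call is below, assuming the default /var/tmp/spdk.sock socket; the JSON parameter keys are inferred from the plugin's -n/-m/-a flags and are an assumption, not something the log confirms.

    # scheduler_rpc_sketch.py - illustrative only, not the harness itself.
    import json, socket

    def rpc(method, params=None, path="/var/tmp/spdk.sock"):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(path)
        req = {"jsonrpc": "2.0", "id": 1, "method": method}
        if params is not None:
            req["params"] = params
        s.sendall(json.dumps(req).encode())
        buf = b""
        while chunk := s.recv(4096):
            buf += chunk
            try:
                return json.loads(buf.decode())
            except ValueError:
                continue  # partial response, keep reading
        return None

    # Equivalent of: rpc_cmd --plugin scheduler_plugin scheduler_thread_create \
    #                    -n active_pinned -m 0x1 -a 100
    # i.e. a thread named active_pinned, pinned to core 0, ~100% busy.
    # Parameter keys below are assumed from the plugin's flags.
    print(rpc("scheduler_thread_create",
              {"name": "active_pinned", "cpu_mask": "0x1", "active": 100}))

With the dynamic scheduler selected above, threads created this way are the load the scheduler balances across reactors; the thread ids handed back (11 and 12 later in the trace) are what the subsequent scheduler_thread_set_active 11 50 and scheduler_thread_delete 12 calls operate on.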
00:04:49.630 13:09:12 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.630 13:09:12 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:49.630 13:09:12 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.630 13:09:12 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.630 13:09:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.630 ************************************ 00:04:49.630 START TEST scheduler_create_thread 00:04:49.630 ************************************ 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.630 2 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.630 3 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.630 4 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.630 5 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.630 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.892 6 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.892 7 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.892 8 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.892 9 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.892 10 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:49.892 13:09:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.833 13:09:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:50.833 13:09:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:50.833 13:09:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:50.833 13:09:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.219 13:09:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.219 13:09:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:52.219 13:09:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:52.220 13:09:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.220 13:09:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.165 13:09:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:53.165 00:04:53.165 real 0m3.383s 00:04:53.165 user 0m0.025s 00:04:53.165 sys 0m0.006s 00:04:53.165 13:09:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.165 13:09:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.165 ************************************ 00:04:53.165 END TEST scheduler_create_thread 00:04:53.165 ************************************ 00:04:53.165 13:09:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:53.165 13:09:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 682068 00:04:53.165 13:09:15 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 682068 ']' 00:04:53.165 13:09:15 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 682068 00:04:53.165 13:09:15 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:53.165 13:09:15 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.165 13:09:15 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 682068 00:04:53.165 13:09:15 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:53.165 13:09:15 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:53.165 13:09:15 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 682068' 00:04:53.165 killing process with pid 682068 00:04:53.165 13:09:15 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 682068 00:04:53.165 13:09:15 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 682068 00:04:53.424 [2024-12-05 13:09:15.909022] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:53.684
00:04:53.684 real 0m4.398s
00:04:53.684 user 0m7.596s
00:04:53.684 sys 0m0.366s
00:04:53.684 13:09:16 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:53.684 13:09:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:04:53.684 ************************************
00:04:53.684 END TEST event_scheduler
00:04:53.684 ************************************
00:04:53.684 13:09:16 event -- event/event.sh@51 -- # modprobe -n nbd
00:04:53.684 13:09:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:04:53.684 13:09:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:53.684 13:09:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:53.684 13:09:16 event -- common/autotest_common.sh@10 -- # set +x
00:04:53.684 ************************************
00:04:53.684 START TEST app_repeat
00:04:53.684 ************************************
00:04:53.684 13:09:16 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:04:53.684 13:09:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:53.684 13:09:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:53.684 13:09:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:04:53.684 13:09:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:53.684 13:09:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:04:53.684 13:09:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:04:53.684 13:09:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:04:53.684 13:09:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=683113
00:04:53.684 13:09:16 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:04:53.684 13:09:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:04:53.684 13:09:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 683113'
00:04:53.684 Process app_repeat pid: 683113
00:04:53.684 13:09:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:53.684 13:09:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:04:53.684 spdk_app_start Round 0
00:04:53.684 13:09:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 683113 /var/tmp/spdk-nbd.sock
00:04:53.684 13:09:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 683113 ']'
00:04:53.684 13:09:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:53.684 13:09:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:53.684 13:09:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:53.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
13:09:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:53.684 13:09:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:53.684 [2024-12-05 13:09:16.162672] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization...
00:04:53.684 [2024-12-05 13:09:16.162725] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid683113 ]
00:04:53.684 [2024-12-05 13:09:16.238796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:53.944 [2024-12-05 13:09:16.275872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.944 [2024-12-05 13:09:16.275882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:53.944 13:09:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:53.944 13:09:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:53.944 13:09:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:53.944 Malloc0
00:04:54.204 13:09:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:54.204 Malloc1
00:04:54.204 13:09:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:54.204 13:09:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:54.204 13:09:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:54.204 13:09:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:54.204 13:09:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:54.204 13:09:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:54.204 13:09:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:54.204 13:09:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:54.204 13:09:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:54.204 13:09:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:54.204 13:09:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:54.204 13:09:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:54.204 13:09:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:54.204 13:09:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:54.204 13:09:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:54.204 13:09:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:54.464 /dev/nbd0
00:04:54.464 13:09:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:54.465 13:09:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:54.465 13:09:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:54.465 13:09:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:54.465 13:09:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:54.465 13:09:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:54.465 13:09:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:54.465 13:09:16 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:54.465 13:09:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:54.465 13:09:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:54.465 13:09:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:54.465 1+0 records in
00:04:54.465 1+0 records out
00:04:54.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269178 s, 15.2 MB/s
00:04:54.465 13:09:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:54.465 13:09:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:54.465 13:09:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:54.465 13:09:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:54.465 13:09:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:54.465 13:09:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:54.465 13:09:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:54.465 13:09:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:54.725 /dev/nbd1
00:04:54.725 13:09:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:54.725 13:09:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:54.725 13:09:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:54.725 13:09:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:54.725 13:09:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:54.725 13:09:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:54.725 13:09:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:54.725 13:09:17 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:54.725 13:09:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:54.725 13:09:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:54.725 13:09:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:54.725 1+0 records in
00:04:54.725 1+0 records out
00:04:54.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000284148 s, 14.4 MB/s
00:04:54.726 13:09:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:54.726 13:09:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:54.726 13:09:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:54.726 13:09:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:54.726 13:09:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:54.726 13:09:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:54.726 13:09:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
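[Editor's note] Both nbd devices are now started. The readiness check the trace runs after each nbd_start_disk is worth isolating: poll /proc/partitions until the device node appears, then do a single direct-I/O read to confirm the block device answers. A sketch of that pattern (an illustrative re-creation of waitfornbd, not the verbatim helper):

# Poll for an nbd device, then prove it with one 4 KiB O_DIRECT read.
waitfornbd_sketch() {
    local nbd_name=$1 i
    local tmp
    tmp=$(mktemp)
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed backoff; the log only shows the success path
    done
    # Read one block through the kernel device, bypassing the page cache.
    dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct || return 1
    [ "$(stat -c %s "$tmp")" != 0 ] || return 1
    rm -f "$tmp"
}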
00:04:54.726 13:09:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:54.726 13:09:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:54.726 13:09:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:55.064 13:09:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:55.064 {
00:04:55.064 "nbd_device": "/dev/nbd0",
00:04:55.064 "bdev_name": "Malloc0"
00:04:55.064 },
00:04:55.065 {
00:04:55.065 "nbd_device": "/dev/nbd1",
00:04:55.065 "bdev_name": "Malloc1"
00:04:55.065 }
00:04:55.065 ]'
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:55.065 {
00:04:55.065 "nbd_device": "/dev/nbd0",
00:04:55.065 "bdev_name": "Malloc0"
00:04:55.065 },
00:04:55.065 {
00:04:55.065 "nbd_device": "/dev/nbd1",
00:04:55.065 "bdev_name": "Malloc1"
00:04:55.065 }
00:04:55.065 ]'
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:55.065 /dev/nbd1'
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:55.065 /dev/nbd1'
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:55.065 256+0 records in
00:04:55.065 256+0 records out
00:04:55.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127664 s, 82.1 MB/s
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:55.065 256+0 records in
00:04:55.065 256+0 records out
00:04:55.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01648 s, 63.6 MB/s
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:55.065 256+0 records in
00:04:55.065 256+0 records out
00:04:55.065 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0173147 s, 60.6 MB/s
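[Editor's note] The nbd_get_count step traced above lists the exported devices as JSON, extracts the device nodes with jq, and counts them with grep -c. A standalone equivalent, assuming the socket path from this run and an $SPDK_DIR checkout (both assumptions):

# Count nbd devices currently exported by the SPDK app.
rpc_sock=/var/tmp/spdk-nbd.sock
nbd_disks_json=$("$SPDK_DIR"/scripts/rpc.py -s "$rpc_sock" nbd_get_disks)
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
# grep -c still prints 0 when nothing matches but exits 1, hence || true,
# the same fallback visible in the trace when the list is empty.
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"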
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:55.065 13:09:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:55.354 13:09:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:55.666 13:09:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:55.666 13:09:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:55.666 13:09:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:55.666 13:09:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:55.666 13:09:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:55.666 13:09:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:55.666 13:09:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:55.666 13:09:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:55.666 13:09:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:55.666 13:09:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:55.666 13:09:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:55.666 13:09:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:55.666 13:09:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:55.927 13:09:18 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:55.927 [2024-12-05 13:09:18.345829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:55.927 [2024-12-05 13:09:18.380759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:55.927 [2024-12-05 13:09:18.380761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:55.927 [2024-12-05 13:09:18.412794] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:55.927 [2024-12-05 13:09:18.412829] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
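[Editor's note] Round 0 is complete at this point: data verified, both nbd disks stopped, the app instance killed and restarted. The data check each round performs is a simple dd/cmp round-trip, sketched below with mktemp standing in for the trace's fixed nbdrandtest path (an assumption for portability):

# Write one random 1 MiB pattern to every nbd device, then compare it back.
nbd_list=(/dev/nbd0 /dev/nbd1)
tmp_file=$(mktemp)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256           # build pattern
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct  # write phase
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev" || echo "mismatch on $dev" >&2  # verify phase
done
rm "$tmp_file"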
00:04:59.226 13:09:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:59.226 13:09:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:04:59.226 spdk_app_start Round 1
00:04:59.226 13:09:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 683113 /var/tmp/spdk-nbd.sock
00:04:59.226 13:09:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 683113 ']'
00:04:59.226 13:09:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:59.226 13:09:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:59.226 13:09:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:59.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:59.226 13:09:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:59.226 13:09:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:59.226 13:09:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:59.226 13:09:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:59.226 13:09:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:59.226 Malloc0
00:04:59.226 13:09:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:59.226 Malloc1
00:04:59.226 13:09:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:59.226 13:09:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:59.226 13:09:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:59.226 13:09:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:59.226 13:09:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:59.226 13:09:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:59.226 13:09:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:59.226 13:09:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:59.226 13:09:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:59.226 13:09:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:59.226 13:09:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:59.226 13:09:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:59.226 13:09:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:59.226 13:09:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:59.226 13:09:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:59.226 13:09:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:59.486 /dev/nbd0
00:04:59.486 13:09:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:59.486 13:09:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:59.486 13:09:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:59.486 13:09:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:59.486 13:09:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:59.486 13:09:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:59.486 13:09:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:59.486 13:09:21 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:59.486 13:09:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:59.486 13:09:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:59.486 13:09:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:59.486 1+0 records in
00:04:59.486 1+0 records out
00:04:59.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0088711 s, 462 kB/s
00:04:59.486 13:09:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:59.486 13:09:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:59.486 13:09:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:59.486 13:09:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:59.486 13:09:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:59.486 13:09:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:59.486 13:09:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:59.486 13:09:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:59.747 /dev/nbd1
00:04:59.747 13:09:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:59.747 13:09:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:59.747 13:09:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:59.747 13:09:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:59.747 13:09:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:59.747 13:09:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:59.747 13:09:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:59.747 13:09:22 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:59.747 13:09:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:59.747 13:09:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:59.747 13:09:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:59.747 1+0 records in
00:04:59.747 1+0 records out
00:04:59.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211982 s, 19.3 MB/s
00:04:59.747 13:09:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:59.747 13:09:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:59.747 13:09:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:59.747 13:09:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:59.747 13:09:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:59.747 13:09:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:59.747 13:09:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:59.747 13:09:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:59.747 13:09:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:59.747 13:09:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:00.007 {
00:05:00.007 "nbd_device": "/dev/nbd0",
00:05:00.007 "bdev_name": "Malloc0"
00:05:00.007 },
00:05:00.007 {
00:05:00.007 "nbd_device": "/dev/nbd1",
00:05:00.007 "bdev_name": "Malloc1"
00:05:00.007 }
00:05:00.007 ]'
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:00.007 {
00:05:00.007 "nbd_device": "/dev/nbd0",
00:05:00.007 "bdev_name": "Malloc0"
00:05:00.007 },
00:05:00.007 {
00:05:00.007 "nbd_device": "/dev/nbd1",
00:05:00.007 "bdev_name": "Malloc1"
00:05:00.007 }
00:05:00.007 ]'
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:00.007 /dev/nbd1'
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:00.007 /dev/nbd1'
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:00.007 256+0 records in
00:05:00.007 256+0 records out
00:05:00.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0029948 s, 350 MB/s
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:00.007 256+0 records in
00:05:00.007 256+0 records out
00:05:00.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01666 s, 62.9 MB/s
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:00.007 256+0 records in
00:05:00.007 256+0 records out
00:05:00.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178934 s, 58.6 MB/s
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:00.007 13:09:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:00.267 13:09:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:00.267 13:09:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:00.267 13:09:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:00.267 13:09:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:00.267 13:09:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:00.267 13:09:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:00.267 13:09:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:00.267 13:09:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:00.267 13:09:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:00.267 13:09:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:00.527 13:09:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:00.527 13:09:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:00.527 13:09:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:00.527 13:09:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:00.527 13:09:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:00.527 13:09:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:00.527 13:09:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:00.527 13:09:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:00.527 13:09:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:00.527 13:09:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:00.527 13:09:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:00.527 13:09:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:00.527 13:09:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:00.527 13:09:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:00.798 13:09:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:00.798 13:09:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:00.798 13:09:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:00.798 13:09:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:00.798 13:09:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:00.798 13:09:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:00.798 13:09:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:00.798 13:09:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:00.798 13:09:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:00.798 13:09:23 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:00.798 13:09:23 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:01.058 [2024-12-05 13:09:23.400461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:01.058 [2024-12-05 13:09:23.435106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:01.058 [2024-12-05 13:09:23.435195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:01.058 [2024-12-05 13:09:23.467869] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:01.058 [2024-12-05 13:09:23.467906] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:04.354 13:09:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:05:04.354 13:09:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:05:04.354 spdk_app_start Round 2
00:05:04.354 13:09:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 683113 /var/tmp/spdk-nbd.sock
00:05:04.354 13:09:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 683113 ']'
00:05:04.354 13:09:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:04.354 13:09:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:04.354 13:09:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:04.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:04.354 13:09:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:04.354 13:09:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:04.354 13:09:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:04.354 13:09:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:04.354 13:09:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:04.354 Malloc0
00:05:04.354 13:09:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:04.354 Malloc1
00:05:04.354 13:09:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:04.354 13:09:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:04.354 13:09:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:04.354 13:09:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:04.354 13:09:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:04.354 13:09:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:04.354 13:09:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:04.354 13:09:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:04.354 13:09:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:04.354 13:09:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:04.354 13:09:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:04.354 13:09:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:04.354 13:09:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:05:04.354 13:09:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:04.354 13:09:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:04.354 13:09:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:04.615 /dev/nbd0
00:05:04.615 13:09:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:04.615 13:09:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:04.615 13:09:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:05:04.615 13:09:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:04.615 13:09:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:04.615 13:09:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:04.615 13:09:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:05:04.615 13:09:27 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:04.615 13:09:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:04.615 13:09:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:04.615 13:09:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:04.615 1+0 records in
00:05:04.615 1+0 records out
00:05:04.615 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025126 s, 16.3 MB/s
00:05:04.615 13:09:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:04.615 13:09:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:04.615 13:09:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:04.615 13:09:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:04.615 13:09:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:04.615 13:09:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:04.615 13:09:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:04.615 13:09:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:04.876 /dev/nbd1
00:05:04.876 13:09:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:04.876 13:09:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:04.876 13:09:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:05:04.876 13:09:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:05:04.876 13:09:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:05:04.876 13:09:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:05:04.876 13:09:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:05:04.876 13:09:27 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:05:04.876 13:09:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:05:04.876 13:09:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:05:04.876 13:09:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:04.876 1+0 records in
00:05:04.876 1+0 records out
00:05:04.876 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210478 s, 19.5 MB/s
00:05:04.876 13:09:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:04.876 13:09:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:05:04.876 13:09:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:04.876 13:09:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:05:04.876 13:09:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:05:04.876 13:09:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:04.876 13:09:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:04.876 13:09:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:04.876 13:09:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:04.876 13:09:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:04.876 13:09:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:04.876 {
00:05:04.876 "nbd_device": "/dev/nbd0",
00:05:04.876 "bdev_name": "Malloc0"
00:05:04.876 },
00:05:04.876 {
00:05:04.876 "nbd_device": "/dev/nbd1",
00:05:04.876 "bdev_name": "Malloc1"
00:05:04.876 }
00:05:04.876 ]'
00:05:04.877 13:09:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:05:04.877 {
00:05:04.877 "nbd_device": "/dev/nbd0",
00:05:04.877 "bdev_name": "Malloc0"
00:05:04.877 },
00:05:04.877 {
00:05:04.877 "nbd_device": "/dev/nbd1",
00:05:04.877 "bdev_name": "Malloc1"
00:05:04.877 }
00:05:04.877 ]'
00:05:04.877 13:09:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:05.139 13:09:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:05.139 /dev/nbd1'
00:05:05.139 13:09:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:05.139 /dev/nbd1'
00:05:05.139 13:09:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:05.139 13:09:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:05.139 13:09:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:05.139 13:09:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:05.139 13:09:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:05.139 13:09:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:05.139 13:09:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:05.139 13:09:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:05.139 13:09:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:05.139 13:09:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:05.139 13:09:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:05.139 13:09:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:05.139 256+0 records in
00:05:05.139 256+0 records out
00:05:05.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121046 s, 86.6 MB/s
00:05:05.139 13:09:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:05.139 13:09:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:05.139 256+0 records in
00:05:05.139 256+0 records out
00:05:05.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0184067 s, 57.0 MB/s
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:05.140 256+0 records in
00:05:05.140 256+0 records out
00:05:05.140 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0177608 s, 59.0 MB/s
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:05.140 13:09:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:05.402 13:09:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:05.684 13:09:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:05.684 13:09:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:05.684 13:09:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:05.684 13:09:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:05.684 13:09:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:05.684 13:09:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:05.684 13:09:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:05.684 13:09:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:05.684 13:09:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:05.685 13:09:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:05.685 13:09:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:05.685 13:09:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:05.685 13:09:28 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:05.945 13:09:28 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:05.945 [2024-12-05 13:09:28.443018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:05.945 [2024-12-05 13:09:28.477601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:05.945 [2024-12-05 13:09:28.477604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:05.945 [2024-12-05 13:09:28.509619] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:05.945 [2024-12-05 13:09:28.509655] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:09.251 13:09:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 683113 /var/tmp/spdk-nbd.sock
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 683113 ']'
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:09.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:05:09.251 13:09:31 event.app_repeat -- event/event.sh@39 -- # killprocess 683113
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 683113 ']'
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 683113
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 683113
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 683113'
00:05:09.251 killing process with pid 683113
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@973 -- # kill 683113
00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@978 -- # wait 683113
00:05:09.251 spdk_app_start is called in Round 0.
00:05:09.251 Shutdown signal received, stop current app iteration
00:05:09.251 Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 reinitialization...
00:05:09.251 spdk_app_start is called in Round 1.
00:05:09.251 Shutdown signal received, stop current app iteration
00:05:09.251 Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 reinitialization...
00:05:09.251 spdk_app_start is called in Round 2.
00:05:09.251 Shutdown signal received, stop current app iteration
00:05:09.251 Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 reinitialization...
00:05:09.251 spdk_app_start is called in Round 3.
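[Editor's note] The teardown traced above is the killprocess helper: confirm the pid is alive, check what is actually running under it, then kill and reap. A sketch of that pattern (an illustrative re-creation, not the verbatim helper; the real one special-cases processes wrapped in sudo, which is skipped here):

killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                 # still alive?
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # Refuse the sudo wrapper; the real helper handles it differently.
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"    # reap; only valid for children of this shell
}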
00:05:09.251 Shutdown signal received, stop current app iteration 00:05:09.251 13:09:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:09.251 13:09:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:09.251 00:05:09.251 real 0m15.524s 00:05:09.251 user 0m33.791s 00:05:09.251 sys 0m2.226s 00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.251 13:09:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.251 ************************************ 00:05:09.251 END TEST app_repeat 00:05:09.251 ************************************ 00:05:09.251 13:09:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:09.251 13:09:31 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:09.251 13:09:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.251 13:09:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.251 13:09:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.251 ************************************ 00:05:09.251 START TEST cpu_locks 00:05:09.251 ************************************ 00:05:09.251 13:09:31 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:09.513 * Looking for test storage... 00:05:09.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:09.513 13:09:31 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.513 13:09:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.513 13:09:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.513 13:09:31 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.513 13:09:31 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:09.513 13:09:31 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.513 13:09:31 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.513 --rc genhtml_branch_coverage=1 00:05:09.513 --rc genhtml_function_coverage=1 00:05:09.513 --rc genhtml_legend=1 00:05:09.513 --rc geninfo_all_blocks=1 00:05:09.513 --rc geninfo_unexecuted_blocks=1 00:05:09.513 00:05:09.513 ' 00:05:09.513 13:09:31 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.513 --rc genhtml_branch_coverage=1 00:05:09.513 --rc genhtml_function_coverage=1 00:05:09.513 --rc genhtml_legend=1 00:05:09.513 --rc geninfo_all_blocks=1 00:05:09.513 --rc geninfo_unexecuted_blocks=1 00:05:09.513 00:05:09.513 ' 00:05:09.513 13:09:31 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.513 --rc genhtml_branch_coverage=1 00:05:09.513 --rc genhtml_function_coverage=1 00:05:09.513 --rc genhtml_legend=1 00:05:09.513 --rc geninfo_all_blocks=1 00:05:09.513 --rc geninfo_unexecuted_blocks=1 00:05:09.513 00:05:09.513 ' 00:05:09.513 13:09:31 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.513 --rc genhtml_branch_coverage=1 00:05:09.513 --rc genhtml_function_coverage=1 00:05:09.513 --rc genhtml_legend=1 00:05:09.513 --rc geninfo_all_blocks=1 00:05:09.513 --rc geninfo_unexecuted_blocks=1 00:05:09.513 00:05:09.513 ' 00:05:09.513 13:09:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:09.513 13:09:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:09.513 13:09:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:09.513 13:09:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:09.513 13:09:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.513 13:09:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.513 13:09:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.514 ************************************ 
00:05:09.514 START TEST default_locks 00:05:09.514 ************************************ 00:05:09.514 13:09:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:09.514 13:09:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=686384 00:05:09.514 13:09:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 686384 00:05:09.514 13:09:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.514 13:09:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 686384 ']' 00:05:09.514 13:09:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.514 13:09:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.514 13:09:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.514 13:09:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.514 13:09:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.514 [2024-12-05 13:09:32.032147] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:05:09.514 [2024-12-05 13:09:32.032195] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid686384 ] 00:05:09.774 [2024-12-05 13:09:32.112542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.774 [2024-12-05 13:09:32.148447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.347 13:09:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.347 13:09:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:10.347 13:09:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 686384 00:05:10.347 13:09:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 686384 00:05:10.347 13:09:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:10.919 lslocks: write error 00:05:10.919 13:09:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 686384 00:05:10.919 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 686384 ']' 00:05:10.919 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 686384 00:05:10.919 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:10.919 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.919 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 686384 00:05:10.919 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.919 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.919 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 686384' 
00:05:10.919 killing process with pid 686384 00:05:10.919 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 686384 00:05:10.919 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 686384 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 686384 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 686384 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 686384 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 686384 ']' 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (686384) - No such process 00:05:11.181 ERROR: process (pid: 686384) is no longer running 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:11.181 00:05:11.181 real 0m1.636s 00:05:11.181 user 0m1.774s 00:05:11.181 sys 0m0.554s 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.181 13:09:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.181 ************************************ 00:05:11.181 END TEST default_locks 00:05:11.181 ************************************ 00:05:11.181 13:09:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:11.181 13:09:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.181 13:09:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.181 13:09:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.181 ************************************ 00:05:11.181 START TEST default_locks_via_rpc 00:05:11.181 ************************************ 00:05:11.181 13:09:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:11.181 13:09:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=686754 00:05:11.181 13:09:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 686754 00:05:11.181 13:09:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.181 13:09:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 686754 ']' 00:05:11.181 13:09:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.181 13:09:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.181 13:09:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
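The NOT waitforlisten 686384 block that just ran, and was expected to fail, hence the deliberate "No such process" error above, is how the suite asserts negative outcomes: the wrapped command's exit status lands in es, and (( !es == 0 )) makes NOT succeed exactly when the command did not. A simplified sketch of that inversion following the traced variables; the real helper also special-cases statuses above 128:

NOT() {
    local es=0
    "$@" || es=$?        # run the wrapped command, capture its status
    (( !es == 0 ))       # succeed only if it failed
}

NOT waitforlisten 686384   # passes here: that pid was already killed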
00:05:11.181 13:09:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.181 13:09:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.181 [2024-12-05 13:09:33.742169] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:05:11.181 [2024-12-05 13:09:33.742218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid686754 ] 00:05:11.442 [2024-12-05 13:09:33.820389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.442 [2024-12-05 13:09:33.856398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.013 13:09:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.013 13:09:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:12.013 13:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:12.013 13:09:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.014 13:09:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.014 13:09:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.014 13:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:12.014 13:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:12.014 13:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:12.014 13:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:12.014 13:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:12.014 13:09:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.014 13:09:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.014 13:09:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.014 13:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 686754 00:05:12.014 13:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 686754 00:05:12.014 13:09:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.588 13:09:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 686754 00:05:12.588 13:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 686754 ']' 00:05:12.588 13:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 686754 00:05:12.588 13:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:12.588 13:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:12.588 13:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 686754 00:05:12.588 13:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:12.588 13:09:35 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:12.588 13:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 686754' 00:05:12.588 killing process with pid 686754 00:05:12.588 13:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 686754 00:05:12.588 13:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 686754 00:05:12.848 00:05:12.848 real 0m1.636s 00:05:12.848 user 0m1.785s 00:05:12.848 sys 0m0.543s 00:05:12.848 13:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.848 13:09:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.848 ************************************ 00:05:12.849 END TEST default_locks_via_rpc 00:05:12.849 ************************************ 00:05:12.849 13:09:35 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:12.849 13:09:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.849 13:09:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.849 13:09:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.849 ************************************ 00:05:12.849 START TEST non_locking_app_on_locked_coremask 00:05:12.849 ************************************ 00:05:12.849 13:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:12.849 13:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=687125 00:05:12.849 13:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 687125 /var/tmp/spdk.sock 00:05:12.849 13:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.849 13:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 687125 ']' 00:05:12.849 13:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.849 13:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.849 13:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.849 13:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.849 13:09:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.109 [2024-12-05 13:09:35.452018] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:05:13.109 [2024-12-05 13:09:35.452067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687125 ] 00:05:13.109 [2024-12-05 13:09:35.530115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.109 [2024-12-05 13:09:35.564032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.680 13:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.680 13:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:13.680 13:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=687376 00:05:13.680 13:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 687376 /var/tmp/spdk2.sock 00:05:13.680 13:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 687376 ']' 00:05:13.680 13:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:13.680 13:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.680 13:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.680 13:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.680 13:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.680 13:09:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.941 [2024-12-05 13:09:36.287670] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:05:13.941 [2024-12-05 13:09:36.287725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687376 ] 00:05:13.941 [2024-12-05 13:09:36.411622] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
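At this point one spdk_tgt (pid 687125) holds the core-0 lock and a second instance has just come up on the same -m 0x1 mask; the "CPU core locks deactivated" notice is the reason that works, since --disable-cpumask-locks keeps the second instance from claiming /var/tmp/spdk_cpu_lock_000. The shape of the setup, with spdk_tgt standing in for the full build/bin path used in this log:

spdk_tgt -m 0x1 &                            # first instance claims spdk_cpu_lock_000
pid1=$!
spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!                                      # same core, but no lock file is taken
lslocks -p "$pid1" | grep -q spdk_cpu_lock   # succeeds: pid1 holds the core lock
lslocks -p "$pid2" | grep -q spdk_cpu_lock   # fails: pid2 holds none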
00:05:13.941 [2024-12-05 13:09:36.411655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.941 [2024-12-05 13:09:36.484330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.514 13:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.514 13:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:14.514 13:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 687125 00:05:14.514 13:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 687125 00:05:14.514 13:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.085 lslocks: write error 00:05:15.085 13:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 687125 00:05:15.085 13:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 687125 ']' 00:05:15.085 13:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 687125 00:05:15.085 13:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:15.086 13:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.086 13:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 687125 00:05:15.346 13:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.346 13:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.346 13:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 687125' 00:05:15.346 killing process with pid 687125 00:05:15.346 13:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 687125 00:05:15.346 13:09:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 687125 00:05:15.607 13:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 687376 00:05:15.607 13:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 687376 ']' 00:05:15.607 13:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 687376 00:05:15.607 13:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:15.607 13:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.607 13:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 687376 00:05:15.607 13:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.607 13:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.607 13:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 687376' 00:05:15.607 killing 
process with pid 687376 00:05:15.607 13:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 687376 00:05:15.607 13:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 687376 00:05:15.868 00:05:15.868 real 0m2.962s 00:05:15.868 user 0m3.281s 00:05:15.868 sys 0m0.888s 00:05:15.868 13:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.868 13:09:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.868 ************************************ 00:05:15.868 END TEST non_locking_app_on_locked_coremask 00:05:15.868 ************************************ 00:05:15.868 13:09:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:15.868 13:09:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.868 13:09:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.868 13:09:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.868 ************************************ 00:05:15.868 START TEST locking_app_on_unlocked_coremask 00:05:15.868 ************************************ 00:05:15.868 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:15.868 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=687831 00:05:15.868 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 687831 /var/tmp/spdk.sock 00:05:15.868 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:15.868 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 687831 ']' 00:05:15.868 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.868 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.868 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.868 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.868 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.128 [2024-12-05 13:09:38.487700] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:05:16.128 [2024-12-05 13:09:38.487747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687831 ] 00:05:16.128 [2024-12-05 13:09:38.565175] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
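locking_app_on_unlocked_coremask flips the previous test around: here the first target is the one started with locks deactivated. All of the lock checks in these traces go through one small helper, and the stray "lslocks: write error" lines are a side effect of it: grep -q exits at the first match, so lslocks takes an EPIPE on the closed pipe and reports it as a write error. The helper, reconstructed from the event/cpu_locks.sh@22 trace lines:

locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock   # any /var/tmp/spdk_cpu_lock_* held?
}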
00:05:16.128 [2024-12-05 13:09:38.565208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.128 [2024-12-05 13:09:38.600451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.388 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.388 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:16.388 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=687838 00:05:16.388 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 687838 /var/tmp/spdk2.sock 00:05:16.388 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:16.388 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 687838 ']' 00:05:16.388 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.388 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.388 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.388 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.388 13:09:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.388 [2024-12-05 13:09:38.846097] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:05:16.388 [2024-12-05 13:09:38.846146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid687838 ] 00:05:16.650 [2024-12-05 13:09:38.967501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.650 [2024-12-05 13:09:39.040464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.221 13:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.221 13:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:17.221 13:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 687838 00:05:17.221 13:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 687838 00:05:17.221 13:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.483 lslocks: write error 00:05:17.483 13:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 687831 00:05:17.483 13:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 687831 ']' 00:05:17.483 13:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 687831 00:05:17.483 13:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:17.483 13:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.483 13:09:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 687831 00:05:17.483 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.483 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.483 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 687831' 00:05:17.483 killing process with pid 687831 00:05:17.483 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 687831 00:05:17.483 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 687831 00:05:18.056 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 687838 00:05:18.056 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 687838 ']' 00:05:18.056 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 687838 00:05:18.056 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:18.056 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.056 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 687838 00:05:18.056 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.056 13:09:40 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.056 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 687838' 00:05:18.056 killing process with pid 687838 00:05:18.056 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 687838 00:05:18.056 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 687838 00:05:18.317 00:05:18.317 real 0m2.278s 00:05:18.317 user 0m2.505s 00:05:18.317 sys 0m0.799s 00:05:18.317 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.317 13:09:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.317 ************************************ 00:05:18.317 END TEST locking_app_on_unlocked_coremask 00:05:18.317 ************************************ 00:05:18.317 13:09:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:18.317 13:09:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.317 13:09:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.317 13:09:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.317 ************************************ 00:05:18.317 START TEST locking_app_on_locked_coremask 00:05:18.317 ************************************ 00:05:18.317 13:09:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:18.317 13:09:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=688214 00:05:18.317 13:09:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 688214 /var/tmp/spdk.sock 00:05:18.317 13:09:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.317 13:09:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 688214 ']' 00:05:18.317 13:09:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.317 13:09:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:18.317 13:09:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.317 13:09:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.317 13:09:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:18.317 [2024-12-05 13:09:40.852116] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:05:18.317 [2024-12-05 13:09:40.852170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688214 ] 00:05:18.579 [2024-12-05 13:09:40.933285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.579 [2024-12-05 13:09:40.971224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=688545 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 688545 /var/tmp/spdk2.sock 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 688545 /var/tmp/spdk2.sock 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 688545 /var/tmp/spdk2.sock 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 688545 ']' 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.152 13:09:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.152 [2024-12-05 13:09:41.695308] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:05:19.152 [2024-12-05 13:09:41.695360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688545 ] 00:05:19.412 [2024-12-05 13:09:41.819937] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 688214 has claimed it. 00:05:19.413 [2024-12-05 13:09:41.819981] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:19.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (688545) - No such process 00:05:19.986 ERROR: process (pid: 688545) is no longer running 00:05:19.986 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.986 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:19.986 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:19.986 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:19.986 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:19.986 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:19.986 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 688214 00:05:19.986 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 688214 00:05:19.986 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.248 lslocks: write error 00:05:20.248 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 688214 00:05:20.248 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 688214 ']' 00:05:20.248 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 688214 00:05:20.248 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:20.248 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:20.248 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 688214 00:05:20.509 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:20.509 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:20.509 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 688214' 00:05:20.509 killing process with pid 688214 00:05:20.509 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 688214 00:05:20.509 13:09:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 688214 00:05:20.509 00:05:20.509 real 0m2.257s 00:05:20.509 user 0m2.521s 00:05:20.509 sys 0m0.644s 00:05:20.509 13:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.509 
13:09:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.509 ************************************ 00:05:20.509 END TEST locking_app_on_locked_coremask 00:05:20.509 ************************************ 00:05:20.771 13:09:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:20.771 13:09:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:20.771 13:09:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.771 13:09:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.771 ************************************ 00:05:20.771 START TEST locking_overlapped_coremask 00:05:20.771 ************************************ 00:05:20.771 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:20.771 13:09:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=688903 00:05:20.771 13:09:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 688903 /var/tmp/spdk.sock 00:05:20.771 13:09:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:20.771 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 688903 ']' 00:05:20.771 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.771 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.771 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.771 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.771 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.771 [2024-12-05 13:09:43.178441] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:05:20.771 [2024-12-05 13:09:43.178497] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688903 ] 00:05:20.771 [2024-12-05 13:09:43.259026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.771 [2024-12-05 13:09:43.300912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.771 [2024-12-05 13:09:43.301080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.771 [2024-12-05 13:09:43.301084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=688924 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 688924 /var/tmp/spdk2.sock 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 688924 /var/tmp/spdk2.sock 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 688924 /var/tmp/spdk2.sock 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 688924 ']' 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.710 13:09:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.710 [2024-12-05 13:09:44.034707] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:05:21.710 [2024-12-05 13:09:44.034760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid688924 ] 00:05:21.710 [2024-12-05 13:09:44.132548] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 688903 has claimed it. 00:05:21.710 [2024-12-05 13:09:44.132581] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:22.280 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (688924) - No such process 00:05:22.280 ERROR: process (pid: 688924) is no longer running 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 688903 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 688903 ']' 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 688903 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 688903 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 688903' 00:05:22.280 killing process with pid 688903 00:05:22.280 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 688903 00:05:22.280 13:09:44 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 688903 00:05:22.540 00:05:22.540 real 0m1.808s 00:05:22.541 user 0m5.232s 00:05:22.541 sys 0m0.381s 00:05:22.541 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.541 13:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:22.541 ************************************ 00:05:22.541 END TEST locking_overlapped_coremask 00:05:22.541 ************************************ 00:05:22.541 13:09:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:22.541 13:09:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.541 13:09:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.541 13:09:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:22.541 ************************************ 00:05:22.541 START TEST locking_overlapped_coremask_via_rpc 00:05:22.541 ************************************ 00:05:22.541 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:22.541 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=689284 00:05:22.541 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 689284 /var/tmp/spdk.sock 00:05:22.541 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:22.541 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 689284 ']' 00:05:22.541 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.541 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.541 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.541 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.541 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.541 [2024-12-05 13:09:45.058735] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:05:22.541 [2024-12-05 13:09:45.058788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid689284 ] 00:05:22.801 [2024-12-05 13:09:45.137732] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
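locking_overlapped_coremask_via_rpc moves the same conflict to runtime: both targets start with --disable-cpumask-locks, and the locks are only requested afterwards over RPC. The masks overlap on exactly one core (0x7 covers cores 0-2, 0x1c covers cores 2-4), so enabling locks on the first target must make the identical request fail on the second. A condensed plan, with rpc_cmd being the suite's RPC helper seen in the trace:

spdk_tgt -m 0x7  --disable-cpumask-locks &                          # cores 0,1,2
spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # cores 2,3,4
rpc_cmd framework_enable_cpumask_locks            # first target claims locks 000-002
NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# expected: 'Cannot create lock on core 2, probably process ... has claimed it.'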
00:05:22.801 [2024-12-05 13:09:45.137763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:22.801 [2024-12-05 13:09:45.179097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.801 [2024-12-05 13:09:45.179220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.801 [2024-12-05 13:09:45.179223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.369 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.369 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:23.369 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=689317 00:05:23.369 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 689317 /var/tmp/spdk2.sock 00:05:23.369 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:23.369 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 689317 ']' 00:05:23.369 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.369 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.369 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.369 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.369 13:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.369 [2024-12-05 13:09:45.914301] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:05:23.369 [2024-12-05 13:09:45.914353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid689317 ] 00:05:23.629 [2024-12-05 13:09:46.013251] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
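Editor's note: the check_remaining_locks trace seen at cpu_locks.sh@36-38 earlier (and repeated after the RPC step below) is hard to read in xtrace form because the right-hand side of the == test is dumped with every character backslash-escaped. Stated plainly, it globs the lock files that actually exist and compares them against a brace expansion of the expected names for cores 0-2:

    # check_remaining_locks, restated without the xtrace escaping:
    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files present
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0, 1 and 2
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "exactly cores 0-2 are locked"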
00:05:23.629 [2024-12-05 13:09:46.013274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.629 [2024-12-05 13:09:46.072427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:23.629 [2024-12-05 13:09:46.075984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.629 [2024-12-05 13:09:46.075987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:24.198 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.198 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:24.198 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:24.198 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.198 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.198 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.198 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.198 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:24.198 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.199 [2024-12-05 13:09:46.712923] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 689284 has claimed it. 
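Editor's note: the failure on core 2 is deterministic. The first target (mask 0x7, /var/tmp/spdk.sock) has just claimed its cores via framework_enable_cpumask_locks, and the second target runs with mask 0x1c, so the two masks intersect in exactly one bit — core 2, the core named in the ERROR line above. The JSON-RPC error dump follows; the overlap itself is one line of arithmetic:

    # Why core 2: the two masks used by this test share exactly one bit.
    # 0x07 = 0b00111 -> cores 0,1,2 (first target)
    # 0x1c = 0b11100 -> cores 2,3,4 (second target)
    printf 'overlap mask: 0x%x\n' $(( 0x07 & 0x1c ))   # prints 0x4, i.e. core 2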
00:05:24.199 request: 00:05:24.199 { 00:05:24.199 "method": "framework_enable_cpumask_locks", 00:05:24.199 "req_id": 1 00:05:24.199 } 00:05:24.199 Got JSON-RPC error response 00:05:24.199 response: 00:05:24.199 { 00:05:24.199 "code": -32603, 00:05:24.199 "message": "Failed to claim CPU core: 2" 00:05:24.199 } 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 689284 /var/tmp/spdk.sock 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 689284 ']' 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.199 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.458 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.459 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:24.459 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 689317 /var/tmp/spdk2.sock 00:05:24.459 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 689317 ']' 00:05:24.459 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.459 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.459 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
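Editor's note: stripped of the NOT/valid_exec_arg plumbing, the step that produced the request/response pair above is a single rpc.py call against the second target's socket; the test only asserts that it exits non-zero:

    # The failing step restated as a direct invocation; it exits non-zero and
    # prints the -32603 "Failed to claim CPU core: 2" response dumped above.
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks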
00:05:24.459 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.459 13:09:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.719 13:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.719 13:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:24.719 13:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:24.719 13:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:24.719 13:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:24.719 13:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:24.719 00:05:24.719 real 0m2.077s 00:05:24.719 user 0m0.849s 00:05:24.719 sys 0m0.156s 00:05:24.719 13:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.719 13:09:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.719 ************************************ 00:05:24.719 END TEST locking_overlapped_coremask_via_rpc 00:05:24.719 ************************************ 00:05:24.719 13:09:47 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:24.719 13:09:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 689284 ]] 00:05:24.719 13:09:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 689284 00:05:24.719 13:09:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 689284 ']' 00:05:24.719 13:09:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 689284 00:05:24.719 13:09:47 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:24.719 13:09:47 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.719 13:09:47 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 689284 00:05:24.719 13:09:47 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.719 13:09:47 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.719 13:09:47 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 689284' 00:05:24.719 killing process with pid 689284 00:05:24.719 13:09:47 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 689284 00:05:24.719 13:09:47 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 689284 00:05:24.981 13:09:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 689317 ]] 00:05:24.981 13:09:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 689317 00:05:24.981 13:09:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 689317 ']' 00:05:24.981 13:09:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 689317 00:05:24.981 13:09:47 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:24.981 13:09:47 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
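Editor's note: the killprocess trace that starts above and continues below relies on two probes worth spelling out: kill -0 delivers no signal and only tests whether the pid exists (and is signalable), and ps --no-headers -o comm= prints just the command name, which the helper compares against sudo before sending a real signal. Standalone, with a placeholder $pid:

    # Probe pattern from killprocess ($pid is a placeholder here):
    kill -0 "$pid" 2>/dev/null || echo "pid $pid already gone"
    ps --no-headers -o comm= "$pid"   # command name only, e.g. reactor_0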
00:05:24.981 13:09:47 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 689317 00:05:24.981 13:09:47 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:24.981 13:09:47 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:24.981 13:09:47 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 689317' 00:05:24.981 killing process with pid 689317 00:05:24.981 13:09:47 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 689317 00:05:24.981 13:09:47 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 689317 00:05:25.242 13:09:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:25.242 13:09:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:25.242 13:09:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 689284 ]] 00:05:25.242 13:09:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 689284 00:05:25.242 13:09:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 689284 ']' 00:05:25.242 13:09:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 689284 00:05:25.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (689284) - No such process 00:05:25.242 13:09:47 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 689284 is not found' 00:05:25.242 Process with pid 689284 is not found 00:05:25.242 13:09:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 689317 ]] 00:05:25.242 13:09:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 689317 00:05:25.242 13:09:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 689317 ']' 00:05:25.242 13:09:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 689317 00:05:25.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (689317) - No such process 00:05:25.242 13:09:47 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 689317 is not found' 00:05:25.242 Process with pid 689317 is not found 00:05:25.242 13:09:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:25.242 00:05:25.242 real 0m15.923s 00:05:25.242 user 0m28.098s 00:05:25.242 sys 0m4.892s 00:05:25.242 13:09:47 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.242 13:09:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.242 ************************************ 00:05:25.242 END TEST cpu_locks 00:05:25.242 ************************************ 00:05:25.242 00:05:25.242 real 0m40.045s 00:05:25.242 user 1m16.078s 00:05:25.242 sys 0m8.112s 00:05:25.242 13:09:47 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.242 13:09:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.242 ************************************ 00:05:25.242 END TEST event 00:05:25.242 ************************************ 00:05:25.242 13:09:47 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:25.242 13:09:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.242 13:09:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.242 13:09:47 -- common/autotest_common.sh@10 -- # set +x 00:05:25.242 ************************************ 00:05:25.242 START TEST thread 00:05:25.243 ************************************ 00:05:25.243 13:09:47 thread -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:25.505 * Looking for test storage... 00:05:25.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:25.505 13:09:47 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:25.505 13:09:47 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:25.505 13:09:47 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:25.505 13:09:47 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:25.505 13:09:47 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.505 13:09:47 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.505 13:09:47 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.505 13:09:47 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.505 13:09:47 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.505 13:09:47 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.505 13:09:47 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.505 13:09:47 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.505 13:09:47 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.505 13:09:47 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.505 13:09:47 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.505 13:09:47 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:25.505 13:09:47 thread -- scripts/common.sh@345 -- # : 1 00:05:25.505 13:09:47 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.505 13:09:47 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.505 13:09:47 thread -- scripts/common.sh@365 -- # decimal 1 00:05:25.505 13:09:47 thread -- scripts/common.sh@353 -- # local d=1 00:05:25.505 13:09:47 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.505 13:09:47 thread -- scripts/common.sh@355 -- # echo 1 00:05:25.505 13:09:47 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.505 13:09:47 thread -- scripts/common.sh@366 -- # decimal 2 00:05:25.505 13:09:47 thread -- scripts/common.sh@353 -- # local d=2 00:05:25.505 13:09:47 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.505 13:09:47 thread -- scripts/common.sh@355 -- # echo 2 00:05:25.505 13:09:47 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.505 13:09:47 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.505 13:09:47 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.505 13:09:47 thread -- scripts/common.sh@368 -- # return 0 00:05:25.505 13:09:47 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.505 13:09:47 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:25.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.505 --rc genhtml_branch_coverage=1 00:05:25.505 --rc genhtml_function_coverage=1 00:05:25.505 --rc genhtml_legend=1 00:05:25.505 --rc geninfo_all_blocks=1 00:05:25.505 --rc geninfo_unexecuted_blocks=1 00:05:25.505 00:05:25.505 ' 00:05:25.505 13:09:47 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:25.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.505 --rc genhtml_branch_coverage=1 00:05:25.505 --rc genhtml_function_coverage=1 00:05:25.505 --rc genhtml_legend=1 00:05:25.505 --rc geninfo_all_blocks=1 00:05:25.505 --rc geninfo_unexecuted_blocks=1 00:05:25.505 00:05:25.505 ' 00:05:25.505 13:09:47 thread 
-- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:25.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.505 --rc genhtml_branch_coverage=1 00:05:25.505 --rc genhtml_function_coverage=1 00:05:25.505 --rc genhtml_legend=1 00:05:25.505 --rc geninfo_all_blocks=1 00:05:25.505 --rc geninfo_unexecuted_blocks=1 00:05:25.505 00:05:25.505 ' 00:05:25.505 13:09:47 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:25.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.505 --rc genhtml_branch_coverage=1 00:05:25.505 --rc genhtml_function_coverage=1 00:05:25.505 --rc genhtml_legend=1 00:05:25.505 --rc geninfo_all_blocks=1 00:05:25.505 --rc geninfo_unexecuted_blocks=1 00:05:25.505 00:05:25.505 ' 00:05:25.505 13:09:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:25.505 13:09:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:25.505 13:09:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.505 13:09:47 thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.505 ************************************ 00:05:25.505 START TEST thread_poller_perf 00:05:25.505 ************************************ 00:05:25.505 13:09:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:25.505 [2024-12-05 13:09:48.038228] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:05:25.505 [2024-12-05 13:09:48.038343] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid689979 ] 00:05:25.766 [2024-12-05 13:09:48.123634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.766 [2024-12-05 13:09:48.165967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.766 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:26.707 [2024-12-05T12:09:49.275Z] ====================================== 00:05:26.707 [2024-12-05T12:09:49.275Z] busy:2410327332 (cyc) 00:05:26.707 [2024-12-05T12:09:49.275Z] total_run_count: 286000 00:05:26.707 [2024-12-05T12:09:49.275Z] tsc_hz: 2400000000 (cyc) 00:05:26.707 [2024-12-05T12:09:49.275Z] ====================================== 00:05:26.707 [2024-12-05T12:09:49.275Z] poller_cost: 8427 (cyc), 3511 (nsec) 00:05:26.707 00:05:26.707 real 0m1.191s 00:05:26.707 user 0m1.120s 00:05:26.707 sys 0m0.068s 00:05:26.707 13:09:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.707 13:09:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:26.707 ************************************ 00:05:26.707 END TEST thread_poller_perf 00:05:26.707 ************************************ 00:05:26.707 13:09:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:26.707 13:09:49 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:26.707 13:09:49 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.707 13:09:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.968 ************************************ 00:05:26.968 START TEST thread_poller_perf 00:05:26.968 ************************************ 00:05:26.968 13:09:49 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:26.968 [2024-12-05 13:09:49.306716] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:05:26.968 [2024-12-05 13:09:49.306815] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690120 ] 00:05:26.968 [2024-12-05 13:09:49.401996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.968 [2024-12-05 13:09:49.441490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.968 Running 1000 pollers for 1 seconds with 0 microseconds period. 
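Editor's note: the poller_cost line in the table above is derived from the other three fields — cycles per call is busy TSC cycles divided by total_run_count, and the nanosecond figure rescales that by the TSC frequency. Recomputed for the 1-microsecond-period run (the 0-period run whose results follow uses the same formula):

    # poller_cost for the first run, recomputed with integer arithmetic:
    busy=2410327332; runs=286000; tsc_hz=2400000000
    echo "cyc:  $(( busy / runs ))"                       # 8427
    echo "nsec: $(( busy / runs * 1000000000 / tsc_hz ))" # 3511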
00:05:27.909 [2024-12-05T12:09:50.477Z] ====================================== 00:05:27.909 [2024-12-05T12:09:50.477Z] busy:2402206280 (cyc) 00:05:27.909 [2024-12-05T12:09:50.477Z] total_run_count: 3811000 00:05:27.909 [2024-12-05T12:09:50.477Z] tsc_hz: 2400000000 (cyc) 00:05:27.910 [2024-12-05T12:09:50.478Z] ====================================== 00:05:27.910 [2024-12-05T12:09:50.478Z] poller_cost: 630 (cyc), 262 (nsec) 00:05:27.910 00:05:27.910 real 0m1.191s 00:05:27.910 user 0m1.111s 00:05:27.910 sys 0m0.076s 00:05:27.910 13:09:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.910 13:09:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.910 ************************************ 00:05:27.910 END TEST thread_poller_perf 00:05:27.910 ************************************ 00:05:28.170 13:09:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:28.170 00:05:28.170 real 0m2.740s 00:05:28.170 user 0m2.390s 00:05:28.170 sys 0m0.362s 00:05:28.170 13:09:50 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.170 13:09:50 thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.170 ************************************ 00:05:28.170 END TEST thread 00:05:28.170 ************************************ 00:05:28.170 13:09:50 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:28.170 13:09:50 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:28.170 13:09:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.170 13:09:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.170 13:09:50 -- common/autotest_common.sh@10 -- # set +x 00:05:28.170 ************************************ 00:05:28.170 START TEST app_cmdline 00:05:28.170 ************************************ 00:05:28.170 13:09:50 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:28.170 * Looking for test storage... 
00:05:28.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:28.170 13:09:50 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.170 13:09:50 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.170 13:09:50 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.432 13:09:50 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.432 13:09:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:28.432 13:09:50 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.432 13:09:50 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.432 --rc genhtml_branch_coverage=1 00:05:28.432 --rc genhtml_function_coverage=1 00:05:28.432 --rc genhtml_legend=1 00:05:28.432 --rc geninfo_all_blocks=1 00:05:28.432 --rc geninfo_unexecuted_blocks=1 00:05:28.432 00:05:28.432 ' 00:05:28.433 13:09:50 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.433 --rc genhtml_branch_coverage=1 00:05:28.433 --rc genhtml_function_coverage=1 00:05:28.433 --rc genhtml_legend=1 00:05:28.433 --rc geninfo_all_blocks=1 00:05:28.433 --rc geninfo_unexecuted_blocks=1 
00:05:28.433 00:05:28.433 ' 00:05:28.433 13:09:50 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.433 --rc genhtml_branch_coverage=1 00:05:28.433 --rc genhtml_function_coverage=1 00:05:28.433 --rc genhtml_legend=1 00:05:28.433 --rc geninfo_all_blocks=1 00:05:28.433 --rc geninfo_unexecuted_blocks=1 00:05:28.433 00:05:28.433 ' 00:05:28.433 13:09:50 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.433 --rc genhtml_branch_coverage=1 00:05:28.433 --rc genhtml_function_coverage=1 00:05:28.433 --rc genhtml_legend=1 00:05:28.433 --rc geninfo_all_blocks=1 00:05:28.433 --rc geninfo_unexecuted_blocks=1 00:05:28.433 00:05:28.433 ' 00:05:28.433 13:09:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:28.433 13:09:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=690503 00:05:28.433 13:09:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 690503 00:05:28.433 13:09:50 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 690503 ']' 00:05:28.433 13:09:50 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:28.433 13:09:50 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.433 13:09:50 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.433 13:09:50 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.433 13:09:50 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.433 13:09:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:28.433 [2024-12-05 13:09:50.842296] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
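Editor's note: the target for this test is deliberately started with --rpcs-allowed spdk_get_version,rpc_get_methods (traced above), so only those two methods are served. The test then checks the allowlist from both sides: positively, the sorted method list must contain exactly those two names, and negatively, any other call must fail with -32601 Method not found. The positive half, restated:

    # Positive allowlist check: list served methods, compare with the two allowed.
    methods=($(./scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort))
    (( ${#methods[@]} == 2 ))
    [[ ${methods[*]} == "rpc_get_methods spdk_get_version" ]] && echo "allowlist holds"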
00:05:28.433 [2024-12-05 13:09:50.842348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid690503 ] 00:05:28.433 [2024-12-05 13:09:50.921376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.433 [2024-12-05 13:09:50.957590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.375 13:09:51 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.375 13:09:51 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:29.375 13:09:51 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:29.375 { 00:05:29.375 "version": "SPDK v25.01-pre git sha1 0ee529aeb", 00:05:29.375 "fields": { 00:05:29.375 "major": 25, 00:05:29.375 "minor": 1, 00:05:29.375 "patch": 0, 00:05:29.375 "suffix": "-pre", 00:05:29.375 "commit": "0ee529aeb" 00:05:29.375 } 00:05:29.375 } 00:05:29.375 13:09:51 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:29.375 13:09:51 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:29.375 13:09:51 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:29.375 13:09:51 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:29.375 13:09:51 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:29.375 13:09:51 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.375 13:09:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:29.375 13:09:51 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:29.375 13:09:51 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:29.375 13:09:51 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.376 13:09:51 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:29.376 13:09:51 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:29.376 13:09:51 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:29.376 13:09:51 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:29.376 13:09:51 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:29.376 13:09:51 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.376 13:09:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.376 13:09:51 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.376 13:09:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.376 13:09:51 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.376 13:09:51 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.376 13:09:51 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.376 13:09:51 app_cmdline -- common/autotest_common.sh@646 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:29.376 13:09:51 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:29.636 request: 00:05:29.636 { 00:05:29.636 "method": "env_dpdk_get_mem_stats", 00:05:29.636 "req_id": 1 00:05:29.636 } 00:05:29.636 Got JSON-RPC error response 00:05:29.636 response: 00:05:29.636 { 00:05:29.636 "code": -32601, 00:05:29.636 "message": "Method not found" 00:05:29.636 } 00:05:29.636 13:09:52 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:29.636 13:09:52 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:29.636 13:09:52 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:29.636 13:09:52 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:29.636 13:09:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 690503 00:05:29.636 13:09:52 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 690503 ']' 00:05:29.636 13:09:52 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 690503 00:05:29.636 13:09:52 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:29.636 13:09:52 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.636 13:09:52 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 690503 00:05:29.636 13:09:52 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.636 13:09:52 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.636 13:09:52 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 690503' 00:05:29.636 killing process with pid 690503 00:05:29.636 13:09:52 app_cmdline -- common/autotest_common.sh@973 -- # kill 690503 00:05:29.636 13:09:52 app_cmdline -- common/autotest_common.sh@978 -- # wait 690503 00:05:29.897 00:05:29.897 real 0m1.731s 00:05:29.897 user 0m2.089s 00:05:29.897 sys 0m0.442s 00:05:29.897 13:09:52 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.897 13:09:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:29.897 ************************************ 00:05:29.897 END TEST app_cmdline 00:05:29.897 ************************************ 00:05:29.897 13:09:52 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:29.897 13:09:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.897 13:09:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.897 13:09:52 -- common/autotest_common.sh@10 -- # set +x 00:05:29.897 ************************************ 00:05:29.897 START TEST version 00:05:29.897 ************************************ 00:05:29.897 13:09:52 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:30.158 * Looking for test storage... 
00:05:30.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:30.158 13:09:52 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.158 13:09:52 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.158 13:09:52 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.158 13:09:52 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.158 13:09:52 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.158 13:09:52 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.158 13:09:52 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.158 13:09:52 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.158 13:09:52 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.158 13:09:52 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.158 13:09:52 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.158 13:09:52 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.158 13:09:52 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.158 13:09:52 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.158 13:09:52 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.158 13:09:52 version -- scripts/common.sh@344 -- # case "$op" in 00:05:30.158 13:09:52 version -- scripts/common.sh@345 -- # : 1 00:05:30.158 13:09:52 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.158 13:09:52 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.158 13:09:52 version -- scripts/common.sh@365 -- # decimal 1 00:05:30.158 13:09:52 version -- scripts/common.sh@353 -- # local d=1 00:05:30.158 13:09:52 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.158 13:09:52 version -- scripts/common.sh@355 -- # echo 1 00:05:30.158 13:09:52 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.158 13:09:52 version -- scripts/common.sh@366 -- # decimal 2 00:05:30.158 13:09:52 version -- scripts/common.sh@353 -- # local d=2 00:05:30.158 13:09:52 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.158 13:09:52 version -- scripts/common.sh@355 -- # echo 2 00:05:30.158 13:09:52 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.158 13:09:52 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.158 13:09:52 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.158 13:09:52 version -- scripts/common.sh@368 -- # return 0 00:05:30.158 13:09:52 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.158 13:09:52 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.158 --rc genhtml_branch_coverage=1 00:05:30.158 --rc genhtml_function_coverage=1 00:05:30.158 --rc genhtml_legend=1 00:05:30.158 --rc geninfo_all_blocks=1 00:05:30.158 --rc geninfo_unexecuted_blocks=1 00:05:30.158 00:05:30.158 ' 00:05:30.158 13:09:52 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.158 --rc genhtml_branch_coverage=1 00:05:30.158 --rc genhtml_function_coverage=1 00:05:30.158 --rc genhtml_legend=1 00:05:30.158 --rc geninfo_all_blocks=1 00:05:30.158 --rc geninfo_unexecuted_blocks=1 00:05:30.158 00:05:30.158 ' 00:05:30.158 13:09:52 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.158 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.158 --rc genhtml_branch_coverage=1 00:05:30.158 --rc genhtml_function_coverage=1 00:05:30.158 --rc genhtml_legend=1 00:05:30.158 --rc geninfo_all_blocks=1 00:05:30.158 --rc geninfo_unexecuted_blocks=1 00:05:30.158 00:05:30.158 ' 00:05:30.158 13:09:52 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.158 --rc genhtml_branch_coverage=1 00:05:30.158 --rc genhtml_function_coverage=1 00:05:30.158 --rc genhtml_legend=1 00:05:30.158 --rc geninfo_all_blocks=1 00:05:30.158 --rc geninfo_unexecuted_blocks=1 00:05:30.158 00:05:30.158 ' 00:05:30.158 13:09:52 version -- app/version.sh@17 -- # get_header_version major 00:05:30.158 13:09:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:30.158 13:09:52 version -- app/version.sh@14 -- # cut -f2 00:05:30.158 13:09:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:30.158 13:09:52 version -- app/version.sh@17 -- # major=25 00:05:30.158 13:09:52 version -- app/version.sh@18 -- # get_header_version minor 00:05:30.158 13:09:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:30.158 13:09:52 version -- app/version.sh@14 -- # cut -f2 00:05:30.158 13:09:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:30.158 13:09:52 version -- app/version.sh@18 -- # minor=1 00:05:30.158 13:09:52 version -- app/version.sh@19 -- # get_header_version patch 00:05:30.158 13:09:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:30.158 13:09:52 version -- app/version.sh@14 -- # cut -f2 00:05:30.158 13:09:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:30.159 13:09:52 version -- app/version.sh@19 -- # patch=0 00:05:30.159 13:09:52 version -- app/version.sh@20 -- # get_header_version suffix 00:05:30.159 13:09:52 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:30.159 13:09:52 version -- app/version.sh@14 -- # cut -f2 00:05:30.159 13:09:52 version -- app/version.sh@14 -- # tr -d '"' 00:05:30.159 13:09:52 version -- app/version.sh@20 -- # suffix=-pre 00:05:30.159 13:09:52 version -- app/version.sh@22 -- # version=25.1 00:05:30.159 13:09:52 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:30.159 13:09:52 version -- app/version.sh@28 -- # version=25.1rc0 00:05:30.159 13:09:52 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:30.159 13:09:52 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:30.159 13:09:52 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:30.159 13:09:52 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:30.159 00:05:30.159 real 0m0.273s 00:05:30.159 user 0m0.154s 00:05:30.159 sys 0m0.168s 00:05:30.159 13:09:52 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.159 
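Editor's note: each get_header_version call traced above is a three-stage pipeline over include/spdk/version.h: grep -E selects the #define line, cut -f2 takes the tab-separated value, and tr -d '"' strips the quotes that wrap string values such as the suffix. The header shape below is an inference from cut -f2 and tr, not quoted from the real file; the extraction itself is exactly the traced one:

    # Assumed shape of the version.h entries (tab-separated, suffix quoted):
    printf '#define SPDK_VERSION_MAJOR\t25\n#define SPDK_VERSION_SUFFIX\t"-pre"\n' > /tmp/version.h
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /tmp/version.h | cut -f2 | tr -d '"'   # -> 25
    grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /tmp/version.h | cut -f2 | tr -d '"'  # -> -pre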
13:09:52 version -- common/autotest_common.sh@10 -- # set +x 00:05:30.159 ************************************ 00:05:30.159 END TEST version 00:05:30.159 ************************************ 00:05:30.159 13:09:52 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:30.159 13:09:52 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:30.159 13:09:52 -- spdk/autotest.sh@194 -- # uname -s 00:05:30.159 13:09:52 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:30.159 13:09:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:30.159 13:09:52 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:30.159 13:09:52 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:30.159 13:09:52 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:30.159 13:09:52 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:30.159 13:09:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.159 13:09:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.420 13:09:52 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:30.420 13:09:52 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:30.420 13:09:52 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:30.420 13:09:52 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:30.420 13:09:52 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:30.420 13:09:52 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:30.420 13:09:52 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:30.420 13:09:52 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:30.420 13:09:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.420 13:09:52 -- common/autotest_common.sh@10 -- # set +x 00:05:30.420 ************************************ 00:05:30.420 START TEST nvmf_tcp 00:05:30.420 ************************************ 00:05:30.420 13:09:52 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:30.420 * Looking for test storage... 
00:05:30.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:30.420 13:09:52 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.420 13:09:52 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.420 13:09:52 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.420 13:09:52 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.420 13:09:52 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.420 13:09:52 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.420 13:09:52 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.420 13:09:52 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.420 13:09:52 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.421 13:09:52 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.421 13:09:52 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.421 13:09:52 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.421 13:09:52 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.421 13:09:52 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.421 13:09:52 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.421 13:09:52 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:30.421 13:09:52 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:30.421 13:09:52 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.421 13:09:52 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.421 13:09:52 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:30.683 13:09:52 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:30.683 13:09:52 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.683 13:09:52 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:30.683 13:09:52 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.683 13:09:52 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:30.683 13:09:52 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:30.683 13:09:52 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.683 13:09:52 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:30.683 13:09:52 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.683 13:09:52 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.683 13:09:52 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.683 13:09:52 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:30.683 13:09:52 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.683 13:09:52 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.683 --rc genhtml_branch_coverage=1 00:05:30.683 --rc genhtml_function_coverage=1 00:05:30.683 --rc genhtml_legend=1 00:05:30.683 --rc geninfo_all_blocks=1 00:05:30.683 --rc geninfo_unexecuted_blocks=1 00:05:30.683 00:05:30.683 ' 00:05:30.683 13:09:52 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.683 --rc genhtml_branch_coverage=1 00:05:30.683 --rc genhtml_function_coverage=1 00:05:30.683 --rc genhtml_legend=1 00:05:30.683 --rc geninfo_all_blocks=1 00:05:30.683 --rc geninfo_unexecuted_blocks=1 00:05:30.683 00:05:30.683 ' 00:05:30.683 13:09:53 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:30.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.683 --rc genhtml_branch_coverage=1 00:05:30.683 --rc genhtml_function_coverage=1 00:05:30.683 --rc genhtml_legend=1 00:05:30.683 --rc geninfo_all_blocks=1 00:05:30.683 --rc geninfo_unexecuted_blocks=1 00:05:30.683 00:05:30.683 ' 00:05:30.683 13:09:53 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.683 --rc genhtml_branch_coverage=1 00:05:30.683 --rc genhtml_function_coverage=1 00:05:30.683 --rc genhtml_legend=1 00:05:30.683 --rc geninfo_all_blocks=1 00:05:30.683 --rc geninfo_unexecuted_blocks=1 00:05:30.683 00:05:30.683 ' 00:05:30.683 13:09:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:30.683 13:09:53 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:30.683 13:09:53 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:30.683 13:09:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:30.683 13:09:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.683 13:09:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.683 ************************************ 00:05:30.683 START TEST nvmf_target_core 00:05:30.683 ************************************ 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:30.683 * Looking for test storage... 00:05:30.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.683 --rc genhtml_branch_coverage=1 00:05:30.683 --rc genhtml_function_coverage=1 00:05:30.683 --rc genhtml_legend=1 00:05:30.683 --rc geninfo_all_blocks=1 00:05:30.683 --rc geninfo_unexecuted_blocks=1 00:05:30.683 00:05:30.683 ' 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.683 --rc genhtml_branch_coverage=1 00:05:30.683 --rc genhtml_function_coverage=1 00:05:30.683 --rc genhtml_legend=1 00:05:30.683 --rc geninfo_all_blocks=1 00:05:30.683 --rc geninfo_unexecuted_blocks=1 00:05:30.683 00:05:30.683 ' 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.683 --rc genhtml_branch_coverage=1 00:05:30.683 --rc genhtml_function_coverage=1 00:05:30.683 --rc genhtml_legend=1 00:05:30.683 --rc geninfo_all_blocks=1 00:05:30.683 --rc geninfo_unexecuted_blocks=1 00:05:30.683 00:05:30.683 ' 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.683 --rc genhtml_branch_coverage=1 00:05:30.683 --rc genhtml_function_coverage=1 00:05:30.683 --rc genhtml_legend=1 00:05:30.683 --rc geninfo_all_blocks=1 00:05:30.683 --rc geninfo_unexecuted_blocks=1 00:05:30.683 00:05:30.683 ' 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.683 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:30.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:30.945 
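The "[: : integer expression expected" message above is a real shell diagnostic, not test output: at nvmf/common.sh line 33 the trace shows '[' '' -eq 1 ']', meaning an unset variable expanded to the empty string and was then compared numerically. A minimal sketch of the failing pattern and a guarded form (SOME_FLAG and enable_feature are placeholder names, not taken from the script):

    # Fails when SOME_FLAG is unset/empty: `[` needs integer operands for -eq.
    [ "$SOME_FLAG" -eq 1 ] && enable_feature    # -> "[: : integer expression expected"

    # Guarded: substitute a default so the empty case compares as 0.
    [ "${SOME_FLAG:-0}" -eq 1 ] && enable_feature

The run evidently continues past the failed test, so the non-zero status of `[` only skips an optional branch here.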
************************************ 00:05:30.945 START TEST nvmf_abort 00:05:30.945 ************************************ 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:30.945 * Looking for test storage... 00:05:30.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:30.945 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:30.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.946 --rc genhtml_branch_coverage=1 00:05:30.946 --rc genhtml_function_coverage=1 00:05:30.946 --rc genhtml_legend=1 00:05:30.946 --rc geninfo_all_blocks=1 00:05:30.946 --rc geninfo_unexecuted_blocks=1 00:05:30.946 00:05:30.946 ' 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:30.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.946 --rc genhtml_branch_coverage=1 00:05:30.946 --rc genhtml_function_coverage=1 00:05:30.946 --rc genhtml_legend=1 00:05:30.946 --rc geninfo_all_blocks=1 00:05:30.946 --rc geninfo_unexecuted_blocks=1 00:05:30.946 00:05:30.946 ' 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:30.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.946 --rc genhtml_branch_coverage=1 00:05:30.946 --rc genhtml_function_coverage=1 00:05:30.946 --rc genhtml_legend=1 00:05:30.946 --rc geninfo_all_blocks=1 00:05:30.946 --rc geninfo_unexecuted_blocks=1 00:05:30.946 00:05:30.946 ' 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:30.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.946 --rc genhtml_branch_coverage=1 00:05:30.946 --rc genhtml_function_coverage=1 00:05:30.946 --rc genhtml_legend=1 00:05:30.946 --rc geninfo_all_blocks=1 00:05:30.946 --rc geninfo_unexecuted_blocks=1 00:05:30.946 00:05:30.946 ' 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.946 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:31.207 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:31.207 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:31.208 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
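The nvmftestinit call that closes the record above drives everything up to the target-app start further down. A paraphrased outline of its TCP path, reconstructed from the trace that follows (a sketch of the flow, not the verbatim function from nvmf/common.sh):

    nvmftestinit() {
        [ -z "$TEST_TRANSPORT" ] && return 1     # needs --transport (tcp here)
        trap nvmftestfini SIGINT SIGTERM EXIT    # guarantee teardown on any exit
        prepare_net_devs                         # NET_TYPE=phy: use real NICs
        remove_spdk_ns                           # clear any stale namespace first
        # gather_supported_nvmf_pci_devs matches PCI vendor:device pairs
        # (Intel e810/x722, Mellanox mlx) and collects each device's
        # interfaces from /sys/bus/pci/devices/$pci/net/
        nvmf_tcp_init                            # build the 10.0.0.x topology
    }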
00:05:31.208 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:31.208 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:31.208 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:31.208 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:31.208 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:31.208 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:31.208 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:31.208 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:31.208 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:31.208 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:31.208 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:31.208 13:09:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:39.348 13:10:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:39.348 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:39.349 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:39.349 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:39.349 13:10:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:39.349 Found net devices under 0000:31:00.0: cvl_0_0 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:39.349 Found net devices under 0000:31:00.1: cvl_0_1 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:39.349 13:10:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:39.349 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:39.349 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:39.349 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:05:39.350 00:05:39.350 --- 10.0.0.2 ping statistics --- 00:05:39.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:39.350 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:39.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:39.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:05:39.350 00:05:39.350 --- 10.0.0.1 ping statistics --- 00:05:39.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:39.350 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=695637 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 695637 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 695637 ']' 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.350 13:10:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.610 [2024-12-05 13:10:01.951187] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
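Condensed from the nvmf_tcp_init records above: the two ports of one physical NIC are split into a target side (cvl_0_0, moved into a fresh network namespace) and an initiator side (cvl_0_1, left in the root namespace), then connectivity is proven in both directions. The key commands are copied from the trace (initial address flushes omitted); only the comments are added:

    ip netns add cvl_0_0_ns_spdk                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # port 0 -> target ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
    ip netns exec cvl_0_0_ns_spdk \
        ip addr add 10.0.0.2/24 dev cvl_0_0             # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port; the comment tag lets teardown strip only SPDK rules.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The nvmf_tgt app itself is then launched inside the namespace via ip netns exec (visible in the nvmfappstart record above), which is why it can listen on 10.0.0.2.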
00:05:39.610 [2024-12-05 13:10:01.951256] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:39.610 [2024-12-05 13:10:02.057109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.610 [2024-12-05 13:10:02.099880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:39.610 [2024-12-05 13:10:02.099925] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:39.610 [2024-12-05 13:10:02.099934] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:39.610 [2024-12-05 13:10:02.099941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:39.610 [2024-12-05 13:10:02.099947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:39.610 [2024-12-05 13:10:02.101515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.610 [2024-12-05 13:10:02.101675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.610 [2024-12-05 13:10:02.101675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.181 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.181 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:40.181 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:40.181 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:40.181 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:40.446 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:40.446 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:40.447 [2024-12-05 13:10:02.790725] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:40.447 Malloc0 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:40.447 Delay0 
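With the app up, the target is provisioned over its RPC socket; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py. The equivalent direct invocations, with arguments copied from the records above and immediately below (the flag comments are interpretation, not part of the log):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0    # 64 MiB bdev, 4096-byte blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000         # ~1 s added latency, if the usual microsecond units apply
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

The delay bdev is presumably the point of this setup: with roughly a second of injected latency per I/O, commands stay queued long enough for the abort example run below to have in-flight targets.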
00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:40.447 [2024-12-05 13:10:02.877565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.447 13:10:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:40.709 [2024-12-05 13:10:03.049935] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:43.250 Initializing NVMe Controllers 00:05:43.250 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:43.250 controller IO queue size 128 less than required 00:05:43.250 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:43.250 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:43.250 Initialization complete. Launching workers. 
00:05:43.250 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28935
00:05:43.250 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28996, failed to submit 62
00:05:43.250 success 28939, unsuccessful 57, failed 0
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:05:43.250 rmmod nvme_tcp
00:05:43.250 rmmod nvme_fabrics
00:05:43.250 rmmod nvme_keyring
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 695637 ']'
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 695637
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 695637 ']'
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 695637
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 695637
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 695637'
00:05:43.250 killing process with pid 695637
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 695637
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 695637
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort --
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:43.250 13:10:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.239 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:45.239 00:05:45.239 real 0m14.323s 00:05:45.239 user 0m14.732s 00:05:45.239 sys 0m7.158s 00:05:45.239 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.239 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:45.239 ************************************ 00:05:45.239 END TEST nvmf_abort 00:05:45.239 ************************************ 00:05:45.240 13:10:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:45.240 13:10:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:45.240 13:10:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.240 13:10:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:45.240 ************************************ 00:05:45.240 START TEST nvmf_ns_hotplug_stress 00:05:45.240 ************************************ 00:05:45.240 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:45.240 * Looking for test storage... 
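The abort summary above is internally consistent: 123 completed + 28935 failed = 29058 I/Os reaching a terminal state, which matches 28996 aborts submitted + 62 that could not be submitted = 29058 attempts (one per command); of the submitted aborts, 28939 + 57 = 28996 resolved as success/unsuccessful with 0 transport failures. Teardown then removes only its own firewall state: the ACCEPT rule installed earlier carried an SPDK_NVMF comment tag, so the iptr helper effectively runs

    iptables-save | grep -v SPDK_NVMF | iptables-restore

rewriting the ruleset minus the tagged entries before the namespace and addresses are deleted.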
00:05:45.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:45.499 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:45.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.500 --rc genhtml_branch_coverage=1 00:05:45.500 --rc genhtml_function_coverage=1 00:05:45.500 --rc genhtml_legend=1 00:05:45.500 --rc geninfo_all_blocks=1 00:05:45.500 --rc geninfo_unexecuted_blocks=1 00:05:45.500 00:05:45.500 ' 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:45.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.500 --rc genhtml_branch_coverage=1 00:05:45.500 --rc genhtml_function_coverage=1 00:05:45.500 --rc genhtml_legend=1 00:05:45.500 --rc geninfo_all_blocks=1 00:05:45.500 --rc geninfo_unexecuted_blocks=1 00:05:45.500 00:05:45.500 ' 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:45.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.500 --rc genhtml_branch_coverage=1 00:05:45.500 --rc genhtml_function_coverage=1 00:05:45.500 --rc genhtml_legend=1 00:05:45.500 --rc geninfo_all_blocks=1 00:05:45.500 --rc geninfo_unexecuted_blocks=1 00:05:45.500 00:05:45.500 ' 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:45.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.500 --rc genhtml_branch_coverage=1 00:05:45.500 --rc genhtml_function_coverage=1 00:05:45.500 --rc genhtml_legend=1 00:05:45.500 --rc geninfo_all_blocks=1 00:05:45.500 --rc geninfo_unexecuted_blocks=1 00:05:45.500 00:05:45.500 ' 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:45.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:45.500 13:10:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:53.642 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:53.642 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:53.642 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:53.642 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:53.642 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:53.642 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:53.642 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:53.642 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:53.642 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:53.642 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
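The "[: : integer expression expected" complaint above comes from nvmf/common.sh line 33 evaluating '[' '' -eq 1 ']' with an unset option; the run continues because the failed test just selects the default branch. A defensive sketch of the same kind of check (MY_FEATURE_FLAG is a placeholder name, not a variable this suite uses):

    # Default the flag before the numeric test so test(1) never sees an
    # empty operand; ${var:-0} substitutes 0 when var is unset or empty.
    if [ "${MY_FEATURE_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi
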
local -ga e810 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:05:53.643 Found 0000:31:00.0 (0x8086 - 0x159b) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.643 
13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:05:53.643 Found 0000:31:00.1 (0x8086 - 0x159b) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:05:53.643 Found net devices under 0000:31:00.0: cvl_0_0 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:05:53.643 Found net devices under 0000:31:00.1: cvl_0_1 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:53.643 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
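The nvmf_tcp_init steps traced just above and below are worth restating in one place: one port of the e810 pair is moved into a private network namespace so target and initiator run separate IP stacks on a single host. Condensed from the commands in the surrounding trace, with interface and namespace names exactly as the log reports them:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
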
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:05:53.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:05:53.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms
00:05:53.905 
00:05:53.905 --- 10.0.0.2 ping statistics ---
00:05:53.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:53.905 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:05:53.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:05:53.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms
00:05:53.905 
00:05:53.905 --- 10.0.0.1 ping statistics ---
00:05:53.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:05:53.905 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=701084
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 701084
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z
701084 ']' 00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.905 13:10:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:54.167 [2024-12-05 13:10:16.517552] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:05:54.167 [2024-12-05 13:10:16.517622] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:54.167 [2024-12-05 13:10:16.625802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.167 [2024-12-05 13:10:16.676677] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:54.167 [2024-12-05 13:10:16.676725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:54.167 [2024-12-05 13:10:16.676739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:54.167 [2024-12-05 13:10:16.676746] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:54.167 [2024-12-05 13:10:16.676752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
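With the target forked into the namespace, waitforlisten (rpc_addr=/var/tmp/spdk.sock and max_retries=100 in the trace above) blocks until the new process answers on its RPC socket. A simplified sketch of that polling loop, assuming the stock rpc.py rpc_get_methods call as the liveness probe; the real helper in common/autotest_common.sh does more bookkeeping:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    waitforlisten_sketch() {
        local pid=$1 addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2> /dev/null || return 1     # target died early
            "$rpc" -s "$addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1                                        # timed out
    }
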
00:05:54.167 [2024-12-05 13:10:16.678850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.167 [2024-12-05 13:10:16.679017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:54.167 [2024-12-05 13:10:16.679144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.110 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.110 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:55.110 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:55.110 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:55.110 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:55.110 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:55.110 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:55.110 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:55.110 [2024-12-05 13:10:17.516406] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.110 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:55.371 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:55.371 [2024-12-05 13:10:17.869752] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:55.371 13:10:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:55.633 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:55.893 Malloc0 00:05:55.893 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:55.893 Delay0 00:05:55.893 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.155 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:56.416 NULL1 00:05:56.416 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
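At this point the whole test bed is configured over RPC. The sequence just traced, gathered in one place with the long script path abbreviated to $rpc (commands and arguments as they appear in the log; the size comments are an editorial gloss):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0            # 32 MiB backing bdev, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512                 # size in MiB, block size in bytes
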
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:56.677 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=701736 00:05:56.677 13:10:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:56.677 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:05:56.677 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.677 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.938 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:56.938 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:56.938 true 00:05:57.199 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:05:57.199 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.199 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.460 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:57.460 13:10:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:57.721 true 00:05:57.721 13:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:05:57.721 13:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.721 13:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.982 13:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:57.982 13:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:58.245 true 00:05:58.245 13:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:05:58.246 13:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.508 13:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.508 13:10:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:58.508 13:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:58.768 true 00:05:58.768 13:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:05:58.768 13:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.029 13:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.029 13:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:59.029 13:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:59.290 true 00:05:59.290 13:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:05:59.290 13:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.551 13:10:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.551 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:59.551 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:59.811 true 00:05:59.811 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:05:59.811 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.071 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.071 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:00.071 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:00.331 true 00:06:00.331 13:10:22 
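From here to the end of the capture, the log is one pattern repeated with null_size stepping up by one per pass: confirm the perf process (PID 701736) is still alive with kill -0, detach namespace 1, reattach Delay0, then grow the null bdev. Reconstructed from the ns_hotplug_stress.sh lines 44-50 being traced, as a sketch rather than the script's verbatim text:

    null_size=1000
    while kill -0 "$PERF_PID" 2> /dev/null; do           # loop until spdk_nvme_perf exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        (( ++null_size ))
        $rpc bdev_null_resize NULL1 "$null_size"         # echoes "true" in the log
    done
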
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:00.331 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.592 13:10:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.592 13:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:00.592 13:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:00.852 true 00:06:00.852 13:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:00.852 13:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.852 13:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.114 13:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:01.114 13:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:01.376 true 00:06:01.376 13:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:01.376 13:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.376 13:10:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.637 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:01.637 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:01.898 true 00:06:01.898 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:01.898 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.160 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.160 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:02.160 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:02.420 true 00:06:02.420 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:02.420 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.681 13:10:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.681 13:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:02.681 13:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:02.941 true 00:06:02.941 13:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:02.941 13:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.201 13:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.201 13:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:03.201 13:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:03.462 true 00:06:03.462 13:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:03.462 13:10:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.722 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.722 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:03.722 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:03.982 true 00:06:03.982 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:03.982 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.244 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.244 13:10:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:04.244 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:04.505 true 00:06:04.505 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:04.505 13:10:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.765 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.765 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:04.765 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:05.055 true 00:06:05.055 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:05.055 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.316 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.316 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:05.316 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:05.576 true 00:06:05.576 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:05.576 13:10:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.836 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.836 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:05.836 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:06.095 true 00:06:06.095 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:06.095 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.095 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.355 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:06.355 13:10:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:06.614 true 00:06:06.614 13:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:06.614 13:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.874 13:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.874 13:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:06.874 13:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:07.134 true 00:06:07.134 13:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:07.134 13:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.394 13:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.394 13:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:07.394 13:10:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:07.653 true 00:06:07.653 13:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:07.653 13:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.913 13:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.913 13:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:07.913 13:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:08.173 true 00:06:08.173 13:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:08.173 13:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.433 13:10:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.693 13:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:08.693 13:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:08.693 true 00:06:08.693 13:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:08.693 13:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.952 13:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.211 13:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:09.211 13:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:09.211 true 00:06:09.211 13:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:09.211 13:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.490 13:10:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.750 13:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:09.750 13:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:09.750 true 00:06:09.750 13:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:09.750 13:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.011 13:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.272 13:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:10.272 13:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:10.272 true 00:06:10.272 13:10:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:10.272 13:10:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.533 13:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.795 13:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:10.796 13:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:10.796 true 00:06:11.056 13:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:11.056 13:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.056 13:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.318 13:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:11.318 13:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:11.580 true 00:06:11.580 13:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:11.580 13:10:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.580 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.842 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:11.842 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:12.103 true 00:06:12.103 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:12.103 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.103 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.364 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:12.364 13:10:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:12.624 true 00:06:12.624 13:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:12.624 13:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.624 13:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.885 13:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:12.885 13:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:13.144 true 00:06:13.144 13:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:13.144 13:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.404 13:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.404 13:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:13.404 13:10:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:13.664 true 00:06:13.664 13:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:13.664 13:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.924 13:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.924 13:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:13.924 13:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:14.185 true 00:06:14.185 13:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:14.185 13:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.446 13:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.446 13:10:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:14.446 13:10:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:14.708 true 00:06:14.708 13:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:14.708 13:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.969 13:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.229 13:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:15.229 13:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:15.229 true 00:06:15.229 13:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:15.230 13:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.490 13:10:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.751 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:15.751 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:15.751 true 00:06:15.751 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:15.751 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.011 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.272 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:16.272 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:16.272 true 00:06:16.272 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:16.272 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.533 13:10:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.794 13:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:16.794 13:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:16.794 true 00:06:16.794 13:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:16.794 13:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.056 13:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.318 13:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:17.318 13:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:17.318 true 00:06:17.579 13:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:17.579 13:10:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.579 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.841 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:17.841 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:17.841 true 00:06:18.102 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:18.102 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.102 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.363 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:18.363 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:18.624 true 00:06:18.624 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:18.624 13:10:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.624 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.884 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:18.884 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:19.143 true 00:06:19.143 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:19.143 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.143 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.404 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:19.404 13:10:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:19.665 true 00:06:19.665 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:19.665 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.665 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.926 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:19.926 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:20.187 true 00:06:20.187 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:20.188 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.448 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.448 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:20.448 13:10:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:20.709 true 00:06:20.709 13:10:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:20.709 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.969 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.969 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:20.969 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:21.228 true 00:06:21.228 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:21.228 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.488 13:10:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.748 13:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:06:21.748 13:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:06:21.748 true 00:06:21.748 13:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:21.749 13:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.009 13:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.269 13:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:06:22.269 13:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:06:22.269 true 00:06:22.269 13:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:22.269 13:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.529 13:10:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.788 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:06:22.788 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:06:22.788 true 00:06:23.047 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:23.047 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.047 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.307 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:06:23.307 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:06:23.567 true 00:06:23.568 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:23.568 13:10:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.568 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.829 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:06:23.829 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:06:24.090 true 00:06:24.090 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:24.090 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.090 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.352 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:06:24.352 13:10:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:06:24.611 true 00:06:24.611 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736 00:06:24.611 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.873 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.873 13:10:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:06:24.873 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:06:25.133 true
00:06:25.133 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736
00:06:25.133 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:25.395 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:25.395 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:06:25.395 13:10:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:06:25.655 true
00:06:25.655 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736
00:06:25.655 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:25.915 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:25.915 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:06:25.915 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:06:26.228 true
00:06:26.228 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736
00:06:26.228 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:26.498 13:10:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:26.498 13:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:06:26.498 13:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:06:26.790 true
00:06:26.790 13:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736
00:06:26.790 13:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:27.103 Initializing NVMe Controllers
00:06:27.103 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:27.103 Controller IO queue size 128, less than required.
00:06:27.103 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:27.103 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:27.103 Initialization complete. Launching workers.
00:06:27.103 ========================================================
00:06:27.103                                                                             Latency(us)
00:06:27.103 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:27.104 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   30699.92      14.99    4169.15    1419.15    8126.47
00:06:27.104 ========================================================
00:06:27.104 Total                                                                    :   30699.92      14.99    4169.15    1419.15    8126.47
00:06:27.104 
00:06:27.104 13:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:27.104 13:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057
00:06:27.104 13:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057
00:06:27.365 true
00:06:27.365 13:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 701736
00:06:27.365 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (701736) - No such process
00:06:27.365 13:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 701736
00:06:27.365 13:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:27.626 13:10:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:27.626 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:27.626 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:27.626 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:27.626 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:27.626 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:27.887 null0
00:06:27.887 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:27.887 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:27.887 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:28.146 null1
00:06:28.146 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # 
(( ++i )) 00:06:28.146 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:28.146 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:28.146 null2 00:06:28.146 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:28.146 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:28.146 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:28.406 null3 00:06:28.406 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:28.406 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:28.406 13:10:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:28.666 null4 00:06:28.666 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:28.666 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:28.666 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:28.666 null5 00:06:28.666 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:28.666 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:28.666 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:28.926 null6 00:06:28.926 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:28.926 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:28.926 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:29.186 null7 00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
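
At this point the trace has changed phase: kill -0 reported that target PID 701736 is gone, the two surviving namespaces were detached, and lines 58-64 of ns_hotplug_stress.sh set up the concurrent part of the test by creating eight 100 MiB null bdevs (null0 through null7, 4096-byte blocks) and launching one backgrounded add_remove worker per bdev. A minimal sketch of that setup, reconstructed from the traced commands (the $rpc_py shorthand and the exact loop form are assumptions, not a copy of the script):

    # Sketch reconstructed from the traced ns_hotplug_stress.sh lines 58-66.
    # $rpc_py stands in for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py (assumption).
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for (( i = 0; i < nthreads; i++ )); do
        $rpc_py bdev_null_create "null$i" 100 4096    # 100 MiB null bdev, 4 KiB blocks, as traced at line 60
    done
    for (( i = 0; i < nthreads; i++ )); do
        add_remove $(( i + 1 )) "null$i" &            # one hotplug worker per namespace (line 63)
        pids+=($!)                                    # collect worker PIDs (line 64)
    done
    wait "${pids[@]}"                                 # traced below as: wait 708350 708351 ...

Because the workers run in the background, their traced expansions interleave from here on, which is why @62/@63/@64 and @14-@18 entries from different workers alternate in the log below.
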
00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:29.186 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
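
The interleaved @14-@18 entries above and below are the body of the add_remove helper each worker runs: it binds one nsid/bdev pair (add_remove 1 null0, add_remove 2 null1, and so on) and hot-adds then hot-removes that namespace ten times. Reconstructed as a sketch from the traced expansions ($rpc_py as in the sketch above; the function layout is an assumption):

    # Sketch of add_remove() as traced at ns_hotplug_stress.sh lines 14-18.
    add_remove() {
        local nsid=$1 bdev=$2                         # e.g. nsid=2 bdev=null1, per the trace
        for (( i = 0; i < 10; i++ )); do              # ten hotplug cycles per worker (line 16)
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"    # line 17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"            # line 18
        done
    }
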
00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
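
The same four RPCs carry the whole test, so their argument shapes, read directly off the invocations in this trace, are worth spelling out once (the trailing comments are editorial glosses, not rpc.py help text):

    rpc.py bdev_null_create null0 100 4096                               # bdev name, size in MiB, block size in bytes
    rpc.py bdev_null_resize NULL1 1057                                   # bdev name, new size in MiB
    rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2   # nsid, subsystem NQN, bdev to expose
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3         # subsystem NQN, nsid to detach
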
00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
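
For contrast with this concurrent phase: the long single-worker stretch earlier in the trace (null_size climbing from 1027 to 1057) came from the loop at script lines 43-51, which keeps cycling namespace 1 and growing NULL1 one MiB per pass for as long as the target process answers kill -0; the "No such process" error and the wait at line 53 above are how the test notices the target has exited. A sketch of that loop, reconstructed from the traced lines (the starting size, $pid handling, and exact increment expression are assumptions):

    # Sketch of the liveness-gated resize loop, ns_hotplug_stress.sh lines 43-51.
    while kill -0 "$pid"; do                          # line 44: loop until the target dies
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # line 45
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # line 46
        null_size=$(( null_size + 1 ))                # line 49 traces as null_size=1027, 1028, ...
        $rpc_py bdev_null_resize NULL1 "$null_size"   # line 50: grow the null bdev by 1 MiB
    done
    wait "$pid"                                       # line 53: reap the dead target
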
00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 708350 708351 708353 708355 708357 708359 708361 708363 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.187 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.449 13:10:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.449 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.449 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.449 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.710 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.710 13:10:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.710 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.710 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:29.710 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.710 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:29.710 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:29.710 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:29.710 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:29.710 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.710 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.710 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.971 13:10:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:29.971 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.232 13:10:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.232 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.233 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:30.233 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.233 13:10:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.233 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.233 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:30.495 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.495 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.495 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:30.495 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.495 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.495 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.495 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:30.495 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:30.495 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.495 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.495 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.495 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:30.495 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.495 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.495 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.495 13:10:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:30.495 13:10:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.495 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.495 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:30.495 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.495 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.495 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.757 13:10:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.757 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:31.019 13:10:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:31.019 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.309 13:10:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.309 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.310 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:31.568 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.568 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.568 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:31.568 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.568 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.568 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:31.568 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.568 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.568 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:06:31.568 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:31.568 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:31.568 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:31.568 13:10:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:31.568 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:31.569 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:31.569 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:31.569 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.569 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.569 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:31.569 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.827 13:10:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:31.827 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:32.087 13:10:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.087 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.349 13:10:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.349 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:32.609 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.609 13:10:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.609 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:32.609 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.609 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.609 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.609 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:32.609 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:32.609 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:32.609 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:32.609 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.609 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.609 13:10:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:32.609 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:32.609 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:32.870 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.870 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.870 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.870 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.870 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.870 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.870 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:32.870 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:32.870 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.870 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.870 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.870 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.870 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.870 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.870 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.870 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 
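The iterations traced above come from the short add/remove loop at ns_hotplug_stress.sh lines 16-18. A minimal sketch of that loop, assuming a C-style for header and a random nsid pick; only the rpc.py invocations and the nsid/bdev pairing are taken verbatim from the trace, and the irregular interleaving of add and remove calls in the log suggests they are not strictly paired in the real script:

# Minimal sketch of the traced loop (ns_hotplug_stress.sh@16-@18); the loop
# header and the random nsid selection are assumptions inferred from the trace.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for (( i = 0; i < 10; ++i )); do
    nsid=$(( RANDOM % 8 + 1 ))
    # nsid N is always backed by bdev null(N-1) in the trace above
    $rpc_py nvmf_subsystem_add_ns -n $nsid nqn.2016-06.io.spdk:cnode1 null$(( nsid - 1 ))
    $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 $nsid
done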
00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:33.130 rmmod nvme_tcp
00:06:33.130 rmmod nvme_fabrics
00:06:33.130 rmmod nvme_keyring
00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 701084 ']'
00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 701084
00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 701084 ']'
00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 701084
00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 701084
00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:06:33.130 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:06:33.131 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 701084'
00:06:33.131 killing process with pid 701084
00:06:33.131 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 701084
00:06:33.131 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 701084
00:06:33.391 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:33.391 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:33.391 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:33.391 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:06:33.391 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:06:33.391 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:33.391 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:06:33.391 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:33.391 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
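Stripped of the xtrace prefixes, the nvmftestfini teardown just logged reduces to roughly the sequence below. This is a sketch assembled from the traced commands: the retry bound, module names, pid, and iptables pipeline come straight from the trace, while the && break exit condition is an assumption about how the retry loop stops:

# Teardown as traced: unload the kernel initiator modules, kill the target
# process, then strip SPDK's rules out of iptables.
set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # also pulls out nvme_fabrics/nvme_keyring
done
modprobe -v -r nvme-fabrics
set -e
kill 701084 && wait 701084             # nvmfpid recorded for this run
iptables-save | grep -v SPDK_NVMF | iptables-restore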
00:06:33.391 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:33.391 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:33.391 13:10:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:35.304 13:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:35.304
00:06:35.304 real 0m50.099s
00:06:35.304 user 3m20.704s
00:06:35.304 sys 0m17.835s
00:06:35.304 13:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:35.304 13:10:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:35.304 ************************************
00:06:35.304 END TEST nvmf_ns_hotplug_stress
00:06:35.304 ************************************
00:06:35.304 13:10:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:35.304 13:10:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:35.304 13:10:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:35.304 13:10:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:35.566 ************************************
00:06:35.566 START TEST nvmf_delete_subsystem
00:06:35.566 ************************************
00:06:35.566 13:10:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:35.566 * Looking for test storage...
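The starred banners and the real/user/sys block above are printed by the run_test wrapper from autotest_common.sh. Judging only from what appears in the log, it behaves roughly like the sketch below; the real helper also validates its argument count (the '[' 3 -le 1 ']' trace) and toggles xtrace state, and this reconstruction should not be read as its actual implementation:

# Hypothetical shape of run_test, reconstructed from its visible output only.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # here: delete_subsystem.sh --transport=tcp
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}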
00:06:35.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:35.566 13:10:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:35.566 13:10:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:06:35.566 13:10:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:35.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:35.566 --rc genhtml_branch_coverage=1
00:06:35.566 --rc genhtml_function_coverage=1
00:06:35.566 --rc genhtml_legend=1
00:06:35.566 --rc geninfo_all_blocks=1
00:06:35.566 --rc geninfo_unexecuted_blocks=1
00:06:35.566
00:06:35.566 '
00:06:35.566 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:35.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:35.567 --rc genhtml_branch_coverage=1
00:06:35.567 --rc genhtml_function_coverage=1
00:06:35.567 --rc genhtml_legend=1
00:06:35.567 --rc geninfo_all_blocks=1
00:06:35.567 --rc geninfo_unexecuted_blocks=1
00:06:35.567
00:06:35.567 '
00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:35.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:35.567 --rc genhtml_branch_coverage=1
00:06:35.567 --rc genhtml_function_coverage=1
00:06:35.567 --rc genhtml_legend=1
00:06:35.567 --rc geninfo_all_blocks=1
00:06:35.567 --rc geninfo_unexecuted_blocks=1
00:06:35.567
00:06:35.567 '
00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:35.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:35.567 --rc genhtml_branch_coverage=1
00:06:35.567 --rc genhtml_function_coverage=1
00:06:35.567 --rc genhtml_legend=1
00:06:35.567 --rc geninfo_all_blocks=1
00:06:35.567 --rc geninfo_unexecuted_blocks=1
00:06:35.567
00:06:35.567 '
00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source
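The lt 1.15 2 check traced above gates which lcov flags get exported: scripts/common.sh splits both versions on '.', '-', and ':' and compares them component by component. Condensed to the '<' case exercised here (the real cmp_versions dispatches on its op argument and also tracks gt/eq counters), it amounts to:

# Condensed sketch of the traced comparison: returns 0 (true) because the
# first differing component has 1 < 2, so lcov 1.15 counts as older than 2.
lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1
}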
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
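The three very long PATH values above are paths/export.sh being evaluated again on every source of the SPDK scripts; each pass unconditionally prepends the same tool directories, which is why the go/protoc/golangci segments repeat. Per the @2-@6 trace lines, the file amounts to:

# paths/export.sh as reflected in the trace: unconditional prepends, so each
# re-source grows PATH by three more copies of the same directories.
PATH=/opt/golangci/1.54.2/bin:$PATH
PATH=/opt/go/1.21.1/bin:$PATH
PATH=/opt/protoc/21.7/bin:$PATH
export PATH
echo $PATH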
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:35.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:35.567 13:10:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
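The common.sh line 33 complaint above comes from the traced test '[' '' -eq 1 ']': a flag variable expanded empty, -eq needs an integer, so [ prints the error and the test simply evaluates false, which is harmless here. A defensive spelling that avoids the message would default the variable first; SOME_FLAG is a placeholder, since the log does not show which variable was empty:

# '[' '' -eq 1 ']' errors out; giving the variable a numeric default does not.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"    # placeholder action
fi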
local -ga x722 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:43.715 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:43.716 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:43.716 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:43.716 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:06:43.978 Found 0000:31:00.0 (0x8086 - 0x159b) 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:43.978 
13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:06:43.978 Found 0000:31:00.1 (0x8086 - 0x159b) 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:06:43.978 Found net devices under 0000:31:00.0: cvl_0_0 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:06:43.978 Found net devices under 0000:31:00.1: cvl_0_1 
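[Note: the nvmf_tcp_init sequence traced below wires the two ice ports found above into a point-to-point TCP test rig: cvl_0_0 is moved into a private network namespace for the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed into a sketch for readability; the trace itself is authoritative:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
    ip netns add cvl_0_0_ns_spdk                           # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the SPDK_NVMF comment tag lets cleanup strip the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.2                                     # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns

Every later target-side command is prefixed with 'ip netns exec cvl_0_0_ns_spdk' (NVMF_TARGET_NS_CMD), which is why nvmf_tgt below listens on 10.0.0.2 while perf connects from the root namespace.]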
00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:43.978 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:44.240 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:44.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:44.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:06:44.241 00:06:44.241 --- 10.0.0.2 ping statistics --- 00:06:44.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.241 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:44.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:44.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:06:44.241 00:06:44.241 --- 10.0.0.1 ping statistics --- 00:06:44.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:44.241 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=714198 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 714198 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 714198 ']' 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.241 13:11:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.241 13:11:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.241 [2024-12-05 13:11:06.736556] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:06:44.241 [2024-12-05 13:11:06.736628] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:44.501 [2024-12-05 13:11:06.827901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.501 [2024-12-05 13:11:06.868208] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:44.501 [2024-12-05 13:11:06.868245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:44.501 [2024-12-05 13:11:06.868253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:44.501 [2024-12-05 13:11:06.868259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:44.501 [2024-12-05 13:11:06.868265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:44.502 [2024-12-05 13:11:06.869513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.502 [2024-12-05 13:11:06.869516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.073 [2024-12-05 13:11:07.584310] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:45.073 13:11:07 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.073 [2024-12-05 13:11:07.608505] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:45.073 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.074 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.074 NULL1 00:06:45.074 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.074 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:45.074 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.074 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.074 Delay0 00:06:45.074 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.074 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.074 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:45.074 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:45.335 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:45.335 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=714245 00:06:45.336 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:45.336 13:11:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:45.336 [2024-12-05 13:11:07.705304] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
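[Note: at this point the delete_subsystem test has everything in place: nvmf_tgt (pid 714198) running inside the target namespace, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and Delay0 (a null bdev wrapped with roughly one-second delays on every latency path) attached as namespace 1, so outstanding I/O is guaranteed to pile up. spdk_nvme_perf then runs for 5 seconds and, two seconds in, the subsystem is deleted out from under it. A sketch of the sequence, condensed from the rpc_cmd trace above (rpc_cmd wraps SPDK's rpc.py):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_null_create NULL1 1000 512       # 1000 MiB backing device, 512 B blocks
    rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read+write latency, us
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &     # queue depth 128, 70% reads
    sleep 2
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # while I/O is in flight

The flood of 'completed with error (sct=0, sc=8)' lines that follows is the expected outcome: status code type 0 (generic command status) with status code 0x08, i.e. commands aborted because their submission queue was deleted along with the subsystem.]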
00:06:47.249 13:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:47.249 13:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:47.249 13:11:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 [2024-12-05 13:11:09.829495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170d2c0 is same with the state(6) to be set 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read 
completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Read completed with error (sct=0, 
sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 starting I/O failed: -6 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 [2024-12-05 13:11:09.833642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2d8c000c40 is same with the state(6) to be set 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Write completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.510 Read completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Write completed with error (sct=0, sc=8) 00:06:47.511 Write completed with error (sct=0, sc=8) 00:06:47.511 Write completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Write completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Write completed with error (sct=0, sc=8) 
00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Write completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Write completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Write completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Write completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:47.511 Read completed with error (sct=0, sc=8) 00:06:48.450 [2024-12-05 13:11:10.803131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170e5f0 is same with the state(6) to be set 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 [2024-12-05 13:11:10.833251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170d0e0 is same with the state(6) to be set 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 
Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 [2024-12-05 13:11:10.833569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170d4a0 is same with the state(6) to be set 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 [2024-12-05 13:11:10.835270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2d8c00d020 is same with the state(6) to be set 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Write completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.450 Read completed with error (sct=0, sc=8) 00:06:48.451 Write completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Write completed 
with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Write completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 Read completed with error (sct=0, sc=8) 00:06:48.451 [2024-12-05 13:11:10.835423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2d8c00d7e0 is same with the state(6) to be set 00:06:48.451 Initializing NVMe Controllers 00:06:48.451 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:48.451 Controller IO queue size 128, less than required. 00:06:48.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:48.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:48.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:48.451 Initialization complete. Launching workers. 00:06:48.451 ======================================================== 00:06:48.451 Latency(us) 00:06:48.451 Device Information : IOPS MiB/s Average min max 00:06:48.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.84 0.08 895502.02 238.33 1044476.68 00:06:48.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.90 0.07 1072785.95 199.19 2002966.20 00:06:48.451 ======================================================== 00:06:48.451 Total : 322.74 0.16 979493.02 199.19 2002966.20 00:06:48.451 00:06:48.451 [2024-12-05 13:11:10.836108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170e5f0 (9): Bad file descriptor 00:06:48.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:48.451 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.451 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:48.451 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 714245 00:06:48.451 13:11:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 714245 00:06:49.021 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (714245) - No such process 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 714245 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 714245 00:06:49.021 
13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 714245 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:49.021 [2024-12-05 13:11:11.365857] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=715035 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:49.021 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 715035 00:06:49.021 13:11:11 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:49.021 [2024-12-05 13:11:11.456643] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:49.592 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:49.592 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 715035 00:06:49.592 13:11:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:49.852 13:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:49.852 13:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 715035 00:06:49.852 13:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:50.422 13:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:50.422 13:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 715035 00:06:50.422 13:11:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:50.993 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:50.993 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 715035 00:06:50.993 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:51.569 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:51.569 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 715035 00:06:51.569 13:11:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:52.141 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:52.141 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 715035 00:06:52.141 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:52.141 Initializing NVMe Controllers 00:06:52.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:52.141 Controller IO queue size 128, less than required. 00:06:52.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:52.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:52.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:52.141 Initialization complete. Launching workers. 
00:06:52.141 ======================================================== 00:06:52.141 Latency(us) 00:06:52.141 Device Information : IOPS MiB/s Average min max 00:06:52.141 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002038.33 1000140.91 1006584.29 00:06:52.141 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003107.70 1000333.93 1009422.75 00:06:52.141 ======================================================== 00:06:52.141 Total : 256.00 0.12 1002573.02 1000140.91 1009422.75 00:06:52.141 00:06:52.402 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:52.402 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 715035 00:06:52.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (715035) - No such process 00:06:52.402 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 715035 00:06:52.402 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:52.402 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:52.402 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:52.402 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:52.402 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:52.402 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:52.402 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:52.402 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:52.402 rmmod nvme_tcp 00:06:52.402 rmmod nvme_fabrics 00:06:52.402 rmmod nvme_keyring 00:06:52.672 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:52.672 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:52.672 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:52.672 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 714198 ']' 00:06:52.672 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 714198 00:06:52.672 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 714198 ']' 00:06:52.672 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 714198 00:06:52.672 13:11:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:52.672 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.672 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 714198 00:06:52.672 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.672 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:06:52.672 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 714198' 00:06:52.672 killing process with pid 714198 00:06:52.672 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 714198 00:06:52.672 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 714198 00:06:52.672 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:52.673 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:52.673 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:52.673 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:52.673 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:52.673 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:52.673 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:52.673 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:52.673 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:52.673 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.673 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.673 13:11:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.225 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:55.226 00:06:55.226 real 0m19.386s 00:06:55.226 user 0m30.927s 00:06:55.226 sys 0m7.556s 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.226 ************************************ 00:06:55.226 END TEST nvmf_delete_subsystem 00:06:55.226 ************************************ 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:55.226 ************************************ 00:06:55.226 START TEST nvmf_host_management 00:06:55.226 ************************************ 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:55.226 * Looking for test storage... 
00:06:55.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:55.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.226 --rc genhtml_branch_coverage=1 00:06:55.226 --rc genhtml_function_coverage=1 00:06:55.226 --rc genhtml_legend=1 00:06:55.226 --rc geninfo_all_blocks=1 00:06:55.226 --rc geninfo_unexecuted_blocks=1 00:06:55.226 00:06:55.226 ' 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:55.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.226 --rc genhtml_branch_coverage=1 00:06:55.226 --rc genhtml_function_coverage=1 00:06:55.226 --rc genhtml_legend=1 00:06:55.226 --rc geninfo_all_blocks=1 00:06:55.226 --rc geninfo_unexecuted_blocks=1 00:06:55.226 00:06:55.226 ' 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:55.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.226 --rc genhtml_branch_coverage=1 00:06:55.226 --rc genhtml_function_coverage=1 00:06:55.226 --rc genhtml_legend=1 00:06:55.226 --rc geninfo_all_blocks=1 00:06:55.226 --rc geninfo_unexecuted_blocks=1 00:06:55.226 00:06:55.226 ' 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:55.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.226 --rc genhtml_branch_coverage=1 00:06:55.226 --rc genhtml_function_coverage=1 00:06:55.226 --rc genhtml_legend=1 00:06:55.226 --rc geninfo_all_blocks=1 00:06:55.226 --rc geninfo_unexecuted_blocks=1 00:06:55.226 00:06:55.226 ' 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.226 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:55.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:55.227 13:11:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:03.373 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:03.373 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.373 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:03.374 Found net devices under 0000:31:00.0: cvl_0_0 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:03.374 13:11:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:03.374 Found net devices under 0000:31:00.1: cvl_0_1 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:03.374 13:11:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:07:03.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:03.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms
00:07:03.635 
00:07:03.635 --- 10.0.0.2 ping statistics ---
00:07:03.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:03.635 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:03.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:03.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms
00:07:03.635 
00:07:03.635 --- 10.0.0.1 ping statistics ---
00:07:03.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:03.635 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=720626
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 720626
00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
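Condensed, the namespace plumbing just traced does the following: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1/24, the firewall opens the NVMe/TCP port 4420, and one ping in each direction proves the path before the target is launched inside the namespace. A recap of just those commands as they appear in the trace (binary path shortened):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                      # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> root ns
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E

00:07:03.635 13:11:26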
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 720626 ']' 00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.635 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:03.635 [2024-12-05 13:11:26.169100] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:07:03.635 [2024-12-05 13:11:26.169149] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.896 [2024-12-05 13:11:26.274543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:03.896 [2024-12-05 13:11:26.318251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:03.896 [2024-12-05 13:11:26.318300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:03.896 [2024-12-05 13:11:26.318308] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:03.896 [2024-12-05 13:11:26.318316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:03.896 [2024-12-05 13:11:26.318322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
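A note on the -m 0x1E passed to nvmf_tgt above: the mask is binary 11110, so the target claims cores 1 through 4 (hence the four "Reactor started on core N" notices that follow, and "Total cores available: 4"), leaving core 0 free for the bdevperf initiator started later with -c 0x1. A few lines to decode such a mask:

mask=0x1E                       # nvmf_tgt's -m argument
for core in {0..7}; do          # 8 bits is plenty for the masks used here
    (( (mask >> core) & 1 )) && echo "reactor pinned to core $core"
done                            # prints cores 1, 2, 3, 4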
00:07:03.896 [2024-12-05 13:11:26.320438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.896 [2024-12-05 13:11:26.320600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.896 [2024-12-05 13:11:26.320761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.896 [2024-12-05 13:11:26.320761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:04.468 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.468 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:04.468 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:04.468 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:04.468 13:11:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.468 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:04.468 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:04.468 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.468 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.468 [2024-12-05 13:11:27.018587] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:04.468 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.468 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:04.468 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:04.468 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.468 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.730 Malloc0 00:07:04.730 [2024-12-05 13:11:27.096262] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=720802 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 720802 /var/tmp/bdevperf.sock 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 720802 ']' 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:04.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:04.730 { 00:07:04.730 "params": { 00:07:04.730 "name": "Nvme$subsystem", 00:07:04.730 "trtype": "$TEST_TRANSPORT", 00:07:04.730 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:04.730 "adrfam": "ipv4", 00:07:04.730 "trsvcid": "$NVMF_PORT", 00:07:04.730 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:04.730 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:04.730 "hdgst": ${hdgst:-false}, 00:07:04.730 "ddgst": ${ddgst:-false} 00:07:04.730 }, 00:07:04.730 "method": "bdev_nvme_attach_controller" 00:07:04.730 } 00:07:04.730 EOF 00:07:04.730 )") 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:04.730 13:11:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:04.730 "params": { 00:07:04.730 "name": "Nvme0", 00:07:04.730 "trtype": "tcp", 00:07:04.730 "traddr": "10.0.0.2", 00:07:04.730 "adrfam": "ipv4", 00:07:04.730 "trsvcid": "4420", 00:07:04.730 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:04.730 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:04.730 "hdgst": false, 00:07:04.730 "ddgst": false 00:07:04.730 }, 00:07:04.730 "method": "bdev_nvme_attach_controller" 00:07:04.730 }' 00:07:04.730 [2024-12-05 13:11:27.201418] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
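The bdevperf invocation above shows the harness's config-over-file-descriptor trick: gen_nvmf_target_json emits a JSON config whose bdev_nvme_attach_controller params point at the listener created a moment ago, and --json /dev/fd/63 is simply the process substitution that carries it. A hedged sketch of the same flow (the subsystems/bdev envelope below is an assumption, since the log prints only the inner params/method object), followed by the waitforio poll that the next stretch of trace performs:

# Assumed envelope around the params/method fragment printed in the log:
gen_cfg() {
  cat <<'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
JSON
}
# <(gen_cfg) appears inside bdevperf as /dev/fd/63, matching the trace:
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_cfg) -q 64 -o 65536 -w verify -t 10 &
# waitforio, as traced below: poll iostat until at least 100 reads completed.
while :; do
  reads=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
  [ "${reads:-0}" -ge 100 ] 2>/dev/null && break
  sleep 1
done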
00:07:04.730 [2024-12-05 13:11:27.201470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid720802 ] 00:07:04.730 [2024-12-05 13:11:27.280305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.991 [2024-12-05 13:11:27.316411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.991 Running I/O for 10 seconds... 00:07:05.565 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.565 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:05.565 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:05.565 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.565 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:05.565 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.565 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:05.565 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:05.566 13:11:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:05.566 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:05.566 [2024-12-05 13:11:28.075364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688000 is same with the state(6) to be set
00:07:05.567 [2024-12-05 13:11:28.075850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1688000 is same with the state(6) to be set 00:07:05.567 [2024-12-05 13:11:28.075857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688000 is same with the state(6) to be set 00:07:05.567 [2024-12-05 13:11:28.076434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.567 [2024-12-05 13:11:28.076474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.567 [2024-12-05 13:11:28.076493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.567 [2024-12-05 13:11:28.076502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.567 [2024-12-05 13:11:28.076512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.567 [2024-12-05 13:11:28.076520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.567 [2024-12-05 13:11:28.076530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.567 [2024-12-05 13:11:28.076537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.567 [2024-12-05 13:11:28.076547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.567 [2024-12-05 13:11:28.076554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.567 [2024-12-05 13:11:28.076565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.567 [2024-12-05 13:11:28.076572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.567 [2024-12-05 13:11:28.076583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.567 [2024-12-05 13:11:28.076590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.567 [2024-12-05 13:11:28.076599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.567 [2024-12-05 13:11:28.076607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.567 [2024-12-05 13:11:28.076617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.567 [2024-12-05 13:11:28.076629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.567 [2024-12-05 13:11:28.076639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.567 [2024-12-05 13:11:28.076647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same READ / ABORTED - SQ DELETION pair repeats for cid:10 through cid:58 (lba:107776 through lba:113920, len:128, lba stepping by 128 per command), timestamps 13:11:28.076656 through 13:11:28.077491 ...]
00:07:05.569 [2024-12-05 13:11:28.077501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:05.569 [2024-12-05 13:11:28.077508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.569 [2024-12-05 13:11:28.077517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.569 [2024-12-05 13:11:28.077525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.569 [2024-12-05 13:11:28.077535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.569 [2024-12-05 13:11:28.077543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.569 [2024-12-05 13:11:28.077552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.569 [2024-12-05 13:11:28.077559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.569 [2024-12-05 13:11:28.077568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:05.569 [2024-12-05 13:11:28.077576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.569 [2024-12-05 13:11:28.077585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222a270 is same with the state(6) to be set 00:07:05.569 [2024-12-05 13:11:28.077663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.569 [2024-12-05 13:11:28.077675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.569 [2024-12-05 13:11:28.077689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.569 [2024-12-05 13:11:28.077701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.569 [2024-12-05 13:11:28.077709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.569 [2024-12-05 13:11:28.077716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.569 [2024-12-05 13:11:28.077725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:05.569 [2024-12-05 13:11:28.077732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:05.569 [2024-12-05 13:11:28.077740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2219b10 is same with the state(6) to be set 00:07:05.569 [2024-12-05 13:11:28.078967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:05.569 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.569 task offset: 106496 on job bdev=Nvme0n1 fails 00:07:05.569 00:07:05.569 Latency(us) 00:07:05.569 [2024-12-05T12:11:28.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.569 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:05.569 Job: Nvme0n1 ended in about 0.58 seconds with error 00:07:05.569 Verification LBA range: start 0x0 length 0x400 00:07:05.569 Nvme0n1 : 0.58 1442.44 90.15 110.96 0.00 40231.42 6062.08 32986.45 00:07:05.569 [2024-12-05T12:11:28.137Z] =================================================================================================================== 00:07:05.569 [2024-12-05T12:11:28.137Z] Total : 1442.44 90.15 110.96 0.00 40231.42 6062.08 32986.45 00:07:05.569 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:05.569 [2024-12-05 13:11:28.080987] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:05.569 [2024-12-05 13:11:28.081012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2219b10 (9): Bad file descriptor 00:07:05.569 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.569 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:05.569 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.569 13:11:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:05.830 [2024-12-05 13:11:28.133124] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
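The nvmf_subsystem_add_host step traced at target/host_management.sh@85 above corresponds to a plain rpc.py call; a minimal sketch, assuming the target listens on the default RPC socket (/var/tmp/spdk.sock), with both NQNs taken verbatim from the trace:

    #!/usr/bin/env bash
    # Minimal sketch of the @85 step above: grant host0 access to cnode0
    # while bdevperf keeps driving I/O against the subsystem. The rpc.py
    # path is the one used throughout this run; the RPC socket is assumed
    # to be the default.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0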
00:07:06.772 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 720802 00:07:06.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (720802) - No such process 00:07:06.772 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:06.772 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:06.772 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:06.772 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:06.772 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:06.772 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:06.772 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:06.772 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:06.772 { 00:07:06.772 "params": { 00:07:06.772 "name": "Nvme$subsystem", 00:07:06.772 "trtype": "$TEST_TRANSPORT", 00:07:06.772 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:06.772 "adrfam": "ipv4", 00:07:06.772 "trsvcid": "$NVMF_PORT", 00:07:06.772 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:06.772 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:06.772 "hdgst": ${hdgst:-false}, 00:07:06.772 "ddgst": ${ddgst:-false} 00:07:06.772 }, 00:07:06.772 "method": "bdev_nvme_attach_controller" 00:07:06.772 } 00:07:06.772 EOF 00:07:06.772 )") 00:07:06.772 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:06.772 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:06.772 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:06.772 13:11:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:06.772 "params": { 00:07:06.772 "name": "Nvme0", 00:07:06.773 "trtype": "tcp", 00:07:06.773 "traddr": "10.0.0.2", 00:07:06.773 "adrfam": "ipv4", 00:07:06.773 "trsvcid": "4420", 00:07:06.773 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:06.773 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:06.773 "hdgst": false, 00:07:06.773 "ddgst": false 00:07:06.773 }, 00:07:06.773 "method": "bdev_nvme_attach_controller" 00:07:06.773 }' 00:07:06.773 [2024-12-05 13:11:29.150834] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:07:06.773 [2024-12-05 13:11:29.150890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid721297 ] 00:07:06.773 [2024-12-05 13:11:29.229169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.773 [2024-12-05 13:11:29.265093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.033 Running I/O for 1 seconds... 
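The interleaved timestamps make the gen_nvmf_target_json output above hard to read; reassembled, the per-controller fragment handed to bdevperf over /dev/fd/62 looks like the block below. Only this fragment is visible in the trace, so any outer wrapper the helper adds around it is not reproduced; the jq invocation mirrors the traced 'nvmf/common.sh@584 -- # jq .' step.

    # Reassembly of the fragment printed above, with the timestamps removed
    # and piped through jq as the traced step does. All field values are
    # taken verbatim from the trace.
    jq . <<'EOF'
    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF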
00:07:08.419 1598.00 IOPS, 99.88 MiB/s 00:07:08.419 Latency(us) 00:07:08.419 [2024-12-05T12:11:30.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.419 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:08.419 Verification LBA range: start 0x0 length 0x400 00:07:08.419 Nvme0n1 : 1.03 1621.82 101.36 0.00 0.00 38777.47 6307.84 32112.64 00:07:08.419 [2024-12-05T12:11:30.987Z] =================================================================================================================== 00:07:08.419 [2024-12-05T12:11:30.987Z] Total : 1621.82 101.36 0.00 0.00 38777.47 6307.84 32112.64 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:08.419 rmmod nvme_tcp 00:07:08.419 rmmod nvme_fabrics 00:07:08.419 rmmod nvme_keyring 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 720626 ']' 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 720626 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 720626 ']' 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 720626 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 720626 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:08.419 13:11:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 720626' 00:07:08.419 killing process with pid 720626 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 720626 00:07:08.419 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 720626 00:07:08.419 [2024-12-05 13:11:30.971742] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:08.680 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:08.680 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:08.680 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:08.680 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:08.680 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:08.680 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:08.680 13:11:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:08.680 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:08.680 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:08.681 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.681 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:08.681 13:11:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.596 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:10.597 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:10.597 00:07:10.597 real 0m15.754s 00:07:10.597 user 0m23.687s 00:07:10.597 sys 0m7.559s 00:07:10.597 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.597 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:10.597 ************************************ 00:07:10.597 END TEST nvmf_host_management 00:07:10.597 ************************************ 00:07:10.597 13:11:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:10.597 13:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.597 13:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.597 13:11:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:10.597 ************************************ 00:07:10.597 START TEST nvmf_lvol 00:07:10.597 ************************************ 00:07:10.597 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:10.858 * Looking for test storage... 00:07:10.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:10.858 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:10.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.859 --rc genhtml_branch_coverage=1 00:07:10.859 --rc genhtml_function_coverage=1 00:07:10.859 --rc genhtml_legend=1 00:07:10.859 --rc geninfo_all_blocks=1 00:07:10.859 --rc geninfo_unexecuted_blocks=1 00:07:10.859 00:07:10.859 ' 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:10.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.859 --rc genhtml_branch_coverage=1 00:07:10.859 --rc genhtml_function_coverage=1 00:07:10.859 --rc genhtml_legend=1 00:07:10.859 --rc geninfo_all_blocks=1 00:07:10.859 --rc geninfo_unexecuted_blocks=1 00:07:10.859 00:07:10.859 ' 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:10.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.859 --rc genhtml_branch_coverage=1 00:07:10.859 --rc genhtml_function_coverage=1 00:07:10.859 --rc genhtml_legend=1 00:07:10.859 --rc geninfo_all_blocks=1 00:07:10.859 --rc geninfo_unexecuted_blocks=1 00:07:10.859 00:07:10.859 ' 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:10.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.859 --rc genhtml_branch_coverage=1 00:07:10.859 --rc genhtml_function_coverage=1 00:07:10.859 --rc genhtml_legend=1 00:07:10.859 --rc geninfo_all_blocks=1 00:07:10.859 --rc geninfo_unexecuted_blocks=1 00:07:10.859 00:07:10.859 ' 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
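The lt/cmp_versions trace above (deciding whether the installed lcov predates version 2) is compact; a simplified bash reconstruction of the comparison it performs, splitting both versions on '.', '-' and ':' as the IFS=.-: steps show, might read as follows. This is a sketch of the traced logic, not the full scripts/common.sh implementation (the decimal/regex validation steps are omitted).

    # Simplified reconstruction of the cmp_versions walk traced above:
    # split both versions into components and compare them left to right,
    # treating missing components as 0.
    lt() {  # usage: lt VER1 VER2 -> exit 0 when VER1 < VER2
        local -a ver1 ver2
        local IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # equal versions are not "less than"
    }

    lt 1.15 2 && echo 'lcov 1.15 predates lcov 2'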
00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:10.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:10.859 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:10.860 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.860 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:10.860 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:10.860 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.860 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:10.860 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:10.860 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:10.860 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.860 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.860 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.860 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:10.860 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:10.860 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:10.860 13:11:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:19.029 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:19.029 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:19.029 13:11:41 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:19.029 Found net devices under 0000:31:00.0: cvl_0_0 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:19.029 Found net devices under 0000:31:00.1: cvl_0_1 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:19.029 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:19.030 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.030 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.030 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.030 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:19.030 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.030 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.030 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:19.030 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:19.030 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.030 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.030 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:19.030 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:19.030 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.030 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.290 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:19.290 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.290 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:19.290 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.290 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.290 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:19.290 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:19.290 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:19.290 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:19.290 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:07:19.290 00:07:19.290 --- 10.0.0.2 ping statistics --- 00:07:19.290 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.290 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:07:19.290 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.290 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:19.290 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:07:19.291 00:07:19.291 --- 10.0.0.1 ping statistics --- 00:07:19.291 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.291 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:07:19.291 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.291 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:19.291 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:19.291 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.291 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:19.291 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:19.291 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.291 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:19.291 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:19.551 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:19.551 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:19.551 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:19.551 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:19.551 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=726408 00:07:19.551 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 726408 00:07:19.551 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:19.551 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 726408 ']' 00:07:19.551 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.551 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.551 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.551 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.551 13:11:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:19.551 [2024-12-05 13:11:41.930708] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:07:19.551 [2024-12-05 13:11:41.930774] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.551 [2024-12-05 13:11:42.022218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.551 [2024-12-05 13:11:42.063141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.551 [2024-12-05 13:11:42.063177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.551 [2024-12-05 13:11:42.063184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.551 [2024-12-05 13:11:42.063191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.551 [2024-12-05 13:11:42.063197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:19.551 [2024-12-05 13:11:42.064608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.551 [2024-12-05 13:11:42.064726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.551 [2024-12-05 13:11:42.064728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.494 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.494 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:20.494 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:20.494 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:20.494 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:20.494 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.494 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:20.494 [2024-12-05 13:11:42.936660] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.494 13:11:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:20.755 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:20.755 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:21.017 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:21.017 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:21.017 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:21.278 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=c15d767f-09a8-4b0e-946d-814ca378f857 00:07:21.278 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c15d767f-09a8-4b0e-946d-814ca378f857 lvol 20 00:07:21.539 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a049b96d-0c58-4a81-b0ca-709589f1220c 00:07:21.539 13:11:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:21.800 13:11:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a049b96d-0c58-4a81-b0ca-709589f1220c 00:07:21.800 13:11:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:22.061 [2024-12-05 13:11:44.453151] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.061 13:11:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:22.322 13:11:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=727013 00:07:22.322 13:11:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:22.322 13:11:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:23.359 13:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a049b96d-0c58-4a81-b0ca-709589f1220c MY_SNAPSHOT 00:07:23.359 13:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=c1883cf3-403d-49eb-bcb9-b6a5675cef56 00:07:23.359 13:11:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a049b96d-0c58-4a81-b0ca-709589f1220c 30 00:07:23.629 13:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone c1883cf3-403d-49eb-bcb9-b6a5675cef56 MY_CLONE 00:07:23.898 13:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=cf9a643e-ec48-42cb-b329-80afceb7e9b1 00:07:23.898 13:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate cf9a643e-ec48-42cb-b329-80afceb7e9b1 00:07:24.469 13:11:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 727013 00:07:32.607 Initializing NVMe Controllers 00:07:32.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:32.607 Controller IO queue size 128, less than required. 00:07:32.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
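Up to this point the lvol test is a compact tour of the logical-volume RPC surface: two malloc bdevs are striped into raid0, an lvstore is built on the stripe, a volume carved from it is exported over NVMe/TCP, and snapshot, resize, clone, and inflate are all exercised while spdk_nvme_perf writes to the namespace from two cores. Condensed into plain RPC calls (names and argument order are copied from the trace; the shell variables are placeholders for the UUIDs each call prints):

    rpc=./scripts/rpc.py
    $rpc bdev_malloc_create 64 512                        # Malloc0
    $rpc bdev_malloc_create 64 512                        # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze the volume's contents
    $rpc bdev_lvol_resize "$lvol" 30                      # grow the live volume under I/O
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone of the snapshot
    $rpc bdev_lvol_inflate "$clone"                       # fully allocate the clone, detaching it

The point of running perf concurrently is that every one of those volume operations has to succeed while the namespace is under active write load, which is what the latency table that follows is measuring.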
00:07:32.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:32.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:32.607 Initialization complete. Launching workers. 00:07:32.607 ======================================================== 00:07:32.607 Latency(us) 00:07:32.607 Device Information : IOPS MiB/s Average min max 00:07:32.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17307.30 67.61 7397.54 487.44 51790.69 00:07:32.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12253.00 47.86 10448.21 3752.44 64546.67 00:07:32.607 ======================================================== 00:07:32.607 Total : 29560.30 115.47 8662.07 487.44 64546.67 00:07:32.607 00:07:32.607 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:32.867 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a049b96d-0c58-4a81-b0ca-709589f1220c 00:07:32.867 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c15d767f-09a8-4b0e-946d-814ca378f857 00:07:33.127 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:33.127 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:33.127 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:33.127 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:33.127 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:33.128 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:33.128 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:33.128 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:33.128 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:33.128 rmmod nvme_tcp 00:07:33.128 rmmod nvme_fabrics 00:07:33.128 rmmod nvme_keyring 00:07:33.128 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:33.128 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:33.128 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:33.128 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 726408 ']' 00:07:33.128 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 726408 00:07:33.128 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 726408 ']' 00:07:33.128 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 726408 00:07:33.128 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:33.128 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.128 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 726408 00:07:33.388 13:11:55 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.388 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.388 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 726408' 00:07:33.388 killing process with pid 726408 00:07:33.388 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 726408 00:07:33.388 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 726408 00:07:33.388 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:33.388 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:33.388 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:33.388 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:33.388 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:33.388 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:33.388 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:33.388 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:33.388 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:33.388 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.388 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.389 13:11:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.933 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:35.933 00:07:35.933 real 0m24.802s 00:07:35.933 user 1m4.754s 00:07:35.933 sys 0m9.240s 00:07:35.933 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.933 13:11:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:35.933 ************************************ 00:07:35.933 END TEST nvmf_lvol 00:07:35.933 ************************************ 00:07:35.933 13:11:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:35.933 13:11:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:35.933 13:11:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.933 13:11:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.933 ************************************ 00:07:35.933 START TEST nvmf_lvs_grow 00:07:35.933 ************************************ 00:07:35.933 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:35.933 * Looking for test storage... 
00:07:35.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.933 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:35.933 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:35.933 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:35.933 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:35.933 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.933 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.933 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.933 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.933 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.933 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.933 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.933 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.933 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.933 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:35.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.934 --rc genhtml_branch_coverage=1 00:07:35.934 --rc genhtml_function_coverage=1 00:07:35.934 --rc genhtml_legend=1 00:07:35.934 --rc geninfo_all_blocks=1 00:07:35.934 --rc geninfo_unexecuted_blocks=1 00:07:35.934 00:07:35.934 ' 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:35.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.934 --rc genhtml_branch_coverage=1 00:07:35.934 --rc genhtml_function_coverage=1 00:07:35.934 --rc genhtml_legend=1 00:07:35.934 --rc geninfo_all_blocks=1 00:07:35.934 --rc geninfo_unexecuted_blocks=1 00:07:35.934 00:07:35.934 ' 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:35.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.934 --rc genhtml_branch_coverage=1 00:07:35.934 --rc genhtml_function_coverage=1 00:07:35.934 --rc genhtml_legend=1 00:07:35.934 --rc geninfo_all_blocks=1 00:07:35.934 --rc geninfo_unexecuted_blocks=1 00:07:35.934 00:07:35.934 ' 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:35.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.934 --rc genhtml_branch_coverage=1 00:07:35.934 --rc genhtml_function_coverage=1 00:07:35.934 --rc genhtml_legend=1 00:07:35.934 --rc geninfo_all_blocks=1 00:07:35.934 --rc geninfo_unexecuted_blocks=1 00:07:35.934 00:07:35.934 ' 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:35.934 13:11:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.934 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:35.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:35.935 13:11:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:44.079 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:44.079 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:44.079 13:12:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:44.079 Found net devices under 0000:31:00.0: cvl_0_0 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:44.079 Found net devices under 0000:31:00.1: cvl_0_1 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:44.079 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:44.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:44.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:07:44.079 00:07:44.079 --- 10.0.0.2 ping statistics --- 00:07:44.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.079 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:07:44.080 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:44.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:44.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:07:44.080 00:07:44.080 --- 10.0.0.1 ping statistics --- 00:07:44.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:44.080 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:07:44.080 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:44.080 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:44.080 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:44.080 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:44.080 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:44.080 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:44.080 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:44.080 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:44.080 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:44.080 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:44.080 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:44.080 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:44.080 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.342 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=734007 00:07:44.342 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 734007 00:07:44.342 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:44.342 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 734007 ']' 00:07:44.342 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.342 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.342 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.342 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.342 13:12:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:44.342 [2024-12-05 13:12:06.701561] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
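Those two one-packet pings are the acceptance check for the fixture every TCP test in this run depends on: the NIC's two ports are split across network namespaces so that a single host can act as both initiator and target. Stripped of the xtrace bookkeeping, the plumbing traced above amounts to the following (interface and namespace names as in the trace; all commands need root, and the trace's iptables rule also carries an SPDK_NVMF comment omitted here):

    ip netns add cvl_0_0_ns_spdk                          # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root ns reaches the target address
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # and the target reaches back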
00:07:44.342 [2024-12-05 13:12:06.701614] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.342 [2024-12-05 13:12:06.785599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.342 [2024-12-05 13:12:06.820371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.342 [2024-12-05 13:12:06.820404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.342 [2024-12-05 13:12:06.820412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.342 [2024-12-05 13:12:06.820418] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.342 [2024-12-05 13:12:06.820424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:44.342 [2024-12-05 13:12:06.820981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:45.286 [2024-12-05 13:12:07.674593] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:45.286 ************************************ 00:07:45.286 START TEST lvs_grow_clean 00:07:45.286 ************************************ 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:45.286 13:12:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:45.286 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:45.549 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:45.549 13:12:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:45.549 13:12:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=67c91651-0e48-4833-a237-7b4802bc67de 00:07:45.549 13:12:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c91651-0e48-4833-a237-7b4802bc67de 00:07:45.549 13:12:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:45.809 13:12:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:45.809 13:12:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:45.809 13:12:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 67c91651-0e48-4833-a237-7b4802bc67de lvol 150 00:07:46.071 13:12:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=be4c5da7-751d-4f65-bd56-49cf0723ed73 00:07:46.071 13:12:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.071 13:12:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:46.071 [2024-12-05 13:12:08.617588] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:46.071 [2024-12-05 13:12:08.617639] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:46.071 true 00:07:46.071 13:12:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
67c91651-0e48-4833-a237-7b4802bc67de 00:07:46.071 13:12:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:46.331 13:12:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:46.331 13:12:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:46.592 13:12:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 be4c5da7-751d-4f65-bd56-49cf0723ed73 00:07:46.592 13:12:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:46.852 [2024-12-05 13:12:09.275627] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:46.852 13:12:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:47.113 13:12:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=734998 00:07:47.113 13:12:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:47.113 13:12:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:47.113 13:12:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 734998 /var/tmp/bdevperf.sock 00:07:47.113 13:12:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 734998 ']' 00:07:47.113 13:12:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:47.113 13:12:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.113 13:12:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:47.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:47.113 13:12:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.113 13:12:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:47.113 [2024-12-05 13:12:09.512060] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
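This is the core fixture of lvs_grow_clean: an lvstore with 4 MiB clusters sitting on an AIO bdev backed by a 200 MiB file, which reports 49 data clusters once metadata overhead is accounted for. Doubling the backing file and calling bdev_aio_rescan enlarges the bdev (the 51200 -> 102400 block-count notice above) but deliberately leaves the lvstore at 49 clusters; only the bdev_lvol_grow_lvstore call later in the trace claims the new space, flipping total_data_clusters to 99. A reduced sketch of that flow (the backing-file path is a placeholder; the RPC commands, sizes, and jq queries follow the trace):

    rpc=./scripts/rpc.py
    file=/tmp/aio_bdev                                    # placeholder backing file
    truncate -s 200M "$file"
    $rpc bdev_aio_create "$file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
              --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49
    $rpc bdev_lvol_create -u "$lvs" lvol 150              # 150 MiB volume in the store
    truncate -s 400M "$file"                              # grow the backing file...
    $rpc bdev_aio_rescan aio_bdev                         # ...and re-read its size
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still 49
    $rpc bdev_lvol_grow_lvstore -u "$lvs"                 # claim the new clusters
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # now 99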
00:07:47.113 [2024-12-05 13:12:09.512112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid734998 ] 00:07:47.113 [2024-12-05 13:12:09.607618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.113 [2024-12-05 13:12:09.643357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.055 13:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.055 13:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:48.055 13:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:48.317 Nvme0n1 00:07:48.317 13:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:48.317 [ 00:07:48.317 { 00:07:48.317 "name": "Nvme0n1", 00:07:48.317 "aliases": [ 00:07:48.317 "be4c5da7-751d-4f65-bd56-49cf0723ed73" 00:07:48.317 ], 00:07:48.317 "product_name": "NVMe disk", 00:07:48.317 "block_size": 4096, 00:07:48.317 "num_blocks": 38912, 00:07:48.317 "uuid": "be4c5da7-751d-4f65-bd56-49cf0723ed73", 00:07:48.317 "numa_id": 0, 00:07:48.317 "assigned_rate_limits": { 00:07:48.317 "rw_ios_per_sec": 0, 00:07:48.317 "rw_mbytes_per_sec": 0, 00:07:48.317 "r_mbytes_per_sec": 0, 00:07:48.317 "w_mbytes_per_sec": 0 00:07:48.317 }, 00:07:48.317 "claimed": false, 00:07:48.317 "zoned": false, 00:07:48.317 "supported_io_types": { 00:07:48.317 "read": true, 00:07:48.317 "write": true, 00:07:48.317 "unmap": true, 00:07:48.317 "flush": true, 00:07:48.317 "reset": true, 00:07:48.317 "nvme_admin": true, 00:07:48.317 "nvme_io": true, 00:07:48.317 "nvme_io_md": false, 00:07:48.317 "write_zeroes": true, 00:07:48.317 "zcopy": false, 00:07:48.317 "get_zone_info": false, 00:07:48.317 "zone_management": false, 00:07:48.317 "zone_append": false, 00:07:48.317 "compare": true, 00:07:48.317 "compare_and_write": true, 00:07:48.317 "abort": true, 00:07:48.317 "seek_hole": false, 00:07:48.317 "seek_data": false, 00:07:48.317 "copy": true, 00:07:48.317 "nvme_iov_md": false 00:07:48.317 }, 00:07:48.317 "memory_domains": [ 00:07:48.317 { 00:07:48.317 "dma_device_id": "system", 00:07:48.317 "dma_device_type": 1 00:07:48.317 } 00:07:48.317 ], 00:07:48.317 "driver_specific": { 00:07:48.317 "nvme": [ 00:07:48.317 { 00:07:48.317 "trid": { 00:07:48.317 "trtype": "TCP", 00:07:48.317 "adrfam": "IPv4", 00:07:48.317 "traddr": "10.0.0.2", 00:07:48.317 "trsvcid": "4420", 00:07:48.317 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:48.317 }, 00:07:48.317 "ctrlr_data": { 00:07:48.317 "cntlid": 1, 00:07:48.317 "vendor_id": "0x8086", 00:07:48.317 "model_number": "SPDK bdev Controller", 00:07:48.317 "serial_number": "SPDK0", 00:07:48.317 "firmware_revision": "25.01", 00:07:48.317 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:48.317 "oacs": { 00:07:48.317 "security": 0, 00:07:48.317 "format": 0, 00:07:48.317 "firmware": 0, 00:07:48.317 "ns_manage": 0 00:07:48.317 }, 00:07:48.317 "multi_ctrlr": true, 00:07:48.317 
"ana_reporting": false 00:07:48.317 }, 00:07:48.317 "vs": { 00:07:48.317 "nvme_version": "1.3" 00:07:48.317 }, 00:07:48.317 "ns_data": { 00:07:48.317 "id": 1, 00:07:48.317 "can_share": true 00:07:48.317 } 00:07:48.317 } 00:07:48.317 ], 00:07:48.317 "mp_policy": "active_passive" 00:07:48.317 } 00:07:48.317 } 00:07:48.317 ] 00:07:48.317 13:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=735459 00:07:48.317 13:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:48.317 13:12:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:48.578 Running I/O for 10 seconds... 00:07:49.533 Latency(us) 00:07:49.533 [2024-12-05T12:12:12.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.533 Nvme0n1 : 1.00 17912.00 69.97 0.00 0.00 0.00 0.00 0.00 00:07:49.533 [2024-12-05T12:12:12.101Z] =================================================================================================================== 00:07:49.533 [2024-12-05T12:12:12.101Z] Total : 17912.00 69.97 0.00 0.00 0.00 0.00 0.00 00:07:49.533 00:07:50.476 13:12:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 67c91651-0e48-4833-a237-7b4802bc67de 00:07:50.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.476 Nvme0n1 : 2.00 18038.00 70.46 0.00 0.00 0.00 0.00 0.00 00:07:50.476 [2024-12-05T12:12:13.044Z] =================================================================================================================== 00:07:50.476 [2024-12-05T12:12:13.044Z] Total : 18038.00 70.46 0.00 0.00 0.00 0.00 0.00 00:07:50.476 00:07:50.476 true 00:07:50.476 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c91651-0e48-4833-a237-7b4802bc67de 00:07:50.476 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:50.737 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:50.737 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:50.737 13:12:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 735459 00:07:51.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.679 Nvme0n1 : 3.00 18076.00 70.61 0.00 0.00 0.00 0.00 0.00 00:07:51.679 [2024-12-05T12:12:14.247Z] =================================================================================================================== 00:07:51.679 [2024-12-05T12:12:14.247Z] Total : 18076.00 70.61 0.00 0.00 0.00 0.00 0.00 00:07:51.679 00:07:52.628 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.628 Nvme0n1 : 4.00 18123.25 70.79 0.00 0.00 0.00 0.00 0.00 00:07:52.628 [2024-12-05T12:12:15.196Z] 
=================================================================================================================== 00:07:52.628 [2024-12-05T12:12:15.196Z] Total : 18123.25 70.79 0.00 0.00 0.00 0.00 0.00 00:07:52.628 00:07:53.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.575 Nvme0n1 : 5.00 18155.80 70.92 0.00 0.00 0.00 0.00 0.00 00:07:53.575 [2024-12-05T12:12:16.143Z] =================================================================================================================== 00:07:53.575 [2024-12-05T12:12:16.143Z] Total : 18155.80 70.92 0.00 0.00 0.00 0.00 0.00 00:07:53.575 00:07:54.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.517 Nvme0n1 : 6.00 18186.83 71.04 0.00 0.00 0.00 0.00 0.00 00:07:54.517 [2024-12-05T12:12:17.085Z] =================================================================================================================== 00:07:54.517 [2024-12-05T12:12:17.085Z] Total : 18186.83 71.04 0.00 0.00 0.00 0.00 0.00 00:07:54.517 00:07:55.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.457 Nvme0n1 : 7.00 18195.29 71.08 0.00 0.00 0.00 0.00 0.00 00:07:55.457 [2024-12-05T12:12:18.025Z] =================================================================================================================== 00:07:55.457 [2024-12-05T12:12:18.025Z] Total : 18195.29 71.08 0.00 0.00 0.00 0.00 0.00 00:07:55.457 00:07:56.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.398 Nvme0n1 : 8.00 18209.25 71.13 0.00 0.00 0.00 0.00 0.00 00:07:56.398 [2024-12-05T12:12:18.966Z] =================================================================================================================== 00:07:56.398 [2024-12-05T12:12:18.966Z] Total : 18209.25 71.13 0.00 0.00 0.00 0.00 0.00 00:07:56.398 00:07:57.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.801 Nvme0n1 : 9.00 18223.56 71.19 0.00 0.00 0.00 0.00 0.00 00:07:57.801 [2024-12-05T12:12:20.369Z] =================================================================================================================== 00:07:57.801 [2024-12-05T12:12:20.369Z] Total : 18223.56 71.19 0.00 0.00 0.00 0.00 0.00 00:07:57.801 00:07:58.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.740 Nvme0n1 : 10.00 18224.50 71.19 0.00 0.00 0.00 0.00 0.00 00:07:58.740 [2024-12-05T12:12:21.308Z] =================================================================================================================== 00:07:58.740 [2024-12-05T12:12:21.308Z] Total : 18224.50 71.19 0.00 0.00 0.00 0.00 0.00 00:07:58.740 00:07:58.740 00:07:58.740 Latency(us) 00:07:58.740 [2024-12-05T12:12:21.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.740 Nvme0n1 : 10.00 18231.49 71.22 0.00 0.00 7017.70 3454.29 12779.52 00:07:58.740 [2024-12-05T12:12:21.308Z] =================================================================================================================== 00:07:58.740 [2024-12-05T12:12:21.308Z] Total : 18231.49 71.22 0.00 0.00 7017.70 3454.29 12779.52 00:07:58.740 { 00:07:58.740 "results": [ 00:07:58.740 { 00:07:58.740 "job": "Nvme0n1", 00:07:58.740 "core_mask": "0x2", 00:07:58.740 "workload": "randwrite", 00:07:58.740 "status": "finished", 00:07:58.740 "queue_depth": 128, 00:07:58.740 "io_size": 4096, 00:07:58.740 
"runtime": 10.003185, 00:07:58.740 "iops": 18231.493269393697, 00:07:58.740 "mibps": 71.21677058356913, 00:07:58.740 "io_failed": 0, 00:07:58.740 "io_timeout": 0, 00:07:58.740 "avg_latency_us": 7017.70316965779, 00:07:58.740 "min_latency_us": 3454.2933333333335, 00:07:58.740 "max_latency_us": 12779.52 00:07:58.740 } 00:07:58.740 ], 00:07:58.740 "core_count": 1 00:07:58.740 } 00:07:58.740 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 734998 00:07:58.740 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 734998 ']' 00:07:58.740 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 734998 00:07:58.740 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:58.740 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.740 13:12:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 734998 00:07:58.740 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:58.740 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:58.740 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 734998' 00:07:58.740 killing process with pid 734998 00:07:58.740 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 734998 00:07:58.740 Received shutdown signal, test time was about 10.000000 seconds 00:07:58.740 00:07:58.740 Latency(us) 00:07:58.740 [2024-12-05T12:12:21.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.740 [2024-12-05T12:12:21.308Z] =================================================================================================================== 00:07:58.740 [2024-12-05T12:12:21.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:58.740 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 734998 00:07:58.740 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:59.000 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:59.000 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c91651-0e48-4833-a237-7b4802bc67de 00:07:59.000 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:59.259 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:59.259 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:59.259 13:12:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:59.518 [2024-12-05 13:12:21.910663] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:59.518 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c91651-0e48-4833-a237-7b4802bc67de 00:07:59.518 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:59.518 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c91651-0e48-4833-a237-7b4802bc67de 00:07:59.518 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.518 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.518 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.518 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.518 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.518 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.518 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:59.518 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:59.518 13:12:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c91651-0e48-4833-a237-7b4802bc67de 00:07:59.836 request: 00:07:59.836 { 00:07:59.836 "uuid": "67c91651-0e48-4833-a237-7b4802bc67de", 00:07:59.836 "method": "bdev_lvol_get_lvstores", 00:07:59.836 "req_id": 1 00:07:59.836 } 00:07:59.836 Got JSON-RPC error response 00:07:59.836 response: 00:07:59.836 { 00:07:59.836 "code": -19, 00:07:59.836 "message": "No such device" 00:07:59.836 } 00:07:59.836 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:59.836 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.836 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:59.836 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.836 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:59.836 aio_bdev 00:07:59.836 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev be4c5da7-751d-4f65-bd56-49cf0723ed73 00:07:59.836 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=be4c5da7-751d-4f65-bd56-49cf0723ed73 00:07:59.836 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:59.836 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:59.836 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:59.836 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:59.836 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:00.096 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b be4c5da7-751d-4f65-bd56-49cf0723ed73 -t 2000 00:08:00.096 [ 00:08:00.096 { 00:08:00.096 "name": "be4c5da7-751d-4f65-bd56-49cf0723ed73", 00:08:00.096 "aliases": [ 00:08:00.096 "lvs/lvol" 00:08:00.096 ], 00:08:00.096 "product_name": "Logical Volume", 00:08:00.096 "block_size": 4096, 00:08:00.096 "num_blocks": 38912, 00:08:00.096 "uuid": "be4c5da7-751d-4f65-bd56-49cf0723ed73", 00:08:00.096 "assigned_rate_limits": { 00:08:00.096 "rw_ios_per_sec": 0, 00:08:00.096 "rw_mbytes_per_sec": 0, 00:08:00.096 "r_mbytes_per_sec": 0, 00:08:00.096 "w_mbytes_per_sec": 0 00:08:00.096 }, 00:08:00.096 "claimed": false, 00:08:00.096 "zoned": false, 00:08:00.096 "supported_io_types": { 00:08:00.096 "read": true, 00:08:00.096 "write": true, 00:08:00.096 "unmap": true, 00:08:00.096 "flush": false, 00:08:00.096 "reset": true, 00:08:00.096 "nvme_admin": false, 00:08:00.096 "nvme_io": false, 00:08:00.096 "nvme_io_md": false, 00:08:00.096 "write_zeroes": true, 00:08:00.096 "zcopy": false, 00:08:00.096 "get_zone_info": false, 00:08:00.096 "zone_management": false, 00:08:00.096 "zone_append": false, 00:08:00.096 "compare": false, 00:08:00.096 "compare_and_write": false, 00:08:00.096 "abort": false, 00:08:00.096 "seek_hole": true, 00:08:00.096 "seek_data": true, 00:08:00.096 "copy": false, 00:08:00.096 "nvme_iov_md": false 00:08:00.096 }, 00:08:00.096 "driver_specific": { 00:08:00.096 "lvol": { 00:08:00.096 "lvol_store_uuid": "67c91651-0e48-4833-a237-7b4802bc67de", 00:08:00.096 "base_bdev": "aio_bdev", 00:08:00.096 "thin_provision": false, 00:08:00.096 "num_allocated_clusters": 38, 00:08:00.096 "snapshot": false, 00:08:00.096 "clone": false, 00:08:00.096 "esnap_clone": false 00:08:00.096 } 00:08:00.096 } 00:08:00.096 } 00:08:00.096 ] 00:08:00.096 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:00.096 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c91651-0e48-4833-a237-7b4802bc67de 00:08:00.096 
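The lvol dumped above reports "num_allocated_clusters": 38, which is the 150 MiB volume rounded up to whole 4 MiB clusters (38 x 4 MiB = 152 MiB, matching the 38912 four-KiB blocks in the same dump). The jq checks on the next lines read the store-level counters back, and the accounting agrees: 99 total data clusters minus the 38 held by the volume leaves 61 free. A standalone sketch of that readback, with the UUID and expected values taken from this run:

  LVS_UUID=67c91651-0e48-4833-a237-7b4802bc67de
  free=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
  total=$(scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters')
  (( free == 61 && total == 99 )) || echo "unexpected cluster accounting" >&2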
13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:00.357 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:00.357 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 67c91651-0e48-4833-a237-7b4802bc67de 00:08:00.357 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:00.616 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:00.616 13:12:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete be4c5da7-751d-4f65-bd56-49cf0723ed73 00:08:00.616 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 67c91651-0e48-4833-a237-7b4802bc67de 00:08:00.876 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:01.136 00:08:01.136 real 0m15.791s 00:08:01.136 user 0m15.519s 00:08:01.136 sys 0m1.361s 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:01.136 ************************************ 00:08:01.136 END TEST lvs_grow_clean 00:08:01.136 ************************************ 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:01.136 ************************************ 00:08:01.136 START TEST lvs_grow_dirty 00:08:01.136 ************************************ 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:01.136 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:01.395 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:01.395 13:12:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:01.655 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9df07d2a-3040-45fb-a67b-6345fc352ef8 00:08:01.655 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9df07d2a-3040-45fb-a67b-6345fc352ef8 00:08:01.655 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:01.655 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:01.655 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:01.655 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9df07d2a-3040-45fb-a67b-6345fc352ef8 lvol 150 00:08:01.915 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8fa78c7a-04c8-4067-8e0d-f8c352c67e6a 00:08:01.915 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:01.915 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:02.174 [2024-12-05 13:12:24.503076] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:02.175 [2024-12-05 13:12:24.503126] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:02.175 true 00:08:02.175 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9df07d2a-3040-45fb-a67b-6345fc352ef8 00:08:02.175 13:12:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:02.175 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:02.175 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:02.435 13:12:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8fa78c7a-04c8-4067-8e0d-f8c352c67e6a 00:08:02.695 13:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:02.695 [2024-12-05 13:12:25.165093] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:02.695 13:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.955 13:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=738264 00:08:02.955 13:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:02.955 13:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:02.955 13:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 738264 /var/tmp/bdevperf.sock 00:08:02.955 13:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 738264 ']' 00:08:02.955 13:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:02.955 13:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.955 13:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:02.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:02.955 13:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.955 13:12:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:02.955 [2024-12-05 13:12:25.399720] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
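Two details in the setup above are easy to miss. First, the cluster counts follow from the configured sizes: a 200 MiB backing file at --cluster-sz 4194304 holds 50 clusters, and the observed total_data_clusters of 49 implies one cluster is reserved for lvstore metadata (inferred from the counts in this run, not from a documented constant). Second, truncating the file to 400 MiB and calling bdev_aio_rescan grew the aio bdev (51200 -> 102400 blocks), yet the @38 check still expects 49: growing the base bdev does not grow the lvstore until bdev_lvol_grow_lvstore is issued, which happens at @60 further down while bdevperf I/O is already running. A condensed sketch of the grow sequence, names and sizes from this log:

  truncate -s 400M test/nvmf/target/aio_bdev      # backing file: 200M -> 400M
  scripts/rpc.py bdev_aio_rescan aio_bdev         # aio bdev: 51200 -> 102400 blocks
  # lvstore still reports total_data_clusters == 49 at this point
  scripts/rpc.py bdev_lvol_grow_lvstore -u 9df07d2a-3040-45fb-a67b-6345fc352ef8
  # now 400 MiB / 4 MiB = 100 clusters -> 99 data clusters reported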
00:08:02.955 [2024-12-05 13:12:25.399772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid738264 ] 00:08:02.955 [2024-12-05 13:12:25.489280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.955 [2024-12-05 13:12:25.519201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.894 13:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.894 13:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:03.894 13:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:04.154 Nvme0n1 00:08:04.154 13:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:04.414 [ 00:08:04.414 { 00:08:04.414 "name": "Nvme0n1", 00:08:04.414 "aliases": [ 00:08:04.414 "8fa78c7a-04c8-4067-8e0d-f8c352c67e6a" 00:08:04.414 ], 00:08:04.414 "product_name": "NVMe disk", 00:08:04.414 "block_size": 4096, 00:08:04.414 "num_blocks": 38912, 00:08:04.414 "uuid": "8fa78c7a-04c8-4067-8e0d-f8c352c67e6a", 00:08:04.414 "numa_id": 0, 00:08:04.414 "assigned_rate_limits": { 00:08:04.414 "rw_ios_per_sec": 0, 00:08:04.414 "rw_mbytes_per_sec": 0, 00:08:04.414 "r_mbytes_per_sec": 0, 00:08:04.414 "w_mbytes_per_sec": 0 00:08:04.414 }, 00:08:04.414 "claimed": false, 00:08:04.414 "zoned": false, 00:08:04.414 "supported_io_types": { 00:08:04.414 "read": true, 00:08:04.414 "write": true, 00:08:04.414 "unmap": true, 00:08:04.414 "flush": true, 00:08:04.414 "reset": true, 00:08:04.414 "nvme_admin": true, 00:08:04.414 "nvme_io": true, 00:08:04.414 "nvme_io_md": false, 00:08:04.414 "write_zeroes": true, 00:08:04.414 "zcopy": false, 00:08:04.414 "get_zone_info": false, 00:08:04.414 "zone_management": false, 00:08:04.414 "zone_append": false, 00:08:04.414 "compare": true, 00:08:04.414 "compare_and_write": true, 00:08:04.414 "abort": true, 00:08:04.414 "seek_hole": false, 00:08:04.414 "seek_data": false, 00:08:04.414 "copy": true, 00:08:04.414 "nvme_iov_md": false 00:08:04.414 }, 00:08:04.414 "memory_domains": [ 00:08:04.414 { 00:08:04.414 "dma_device_id": "system", 00:08:04.414 "dma_device_type": 1 00:08:04.414 } 00:08:04.414 ], 00:08:04.414 "driver_specific": { 00:08:04.414 "nvme": [ 00:08:04.414 { 00:08:04.414 "trid": { 00:08:04.414 "trtype": "TCP", 00:08:04.414 "adrfam": "IPv4", 00:08:04.414 "traddr": "10.0.0.2", 00:08:04.414 "trsvcid": "4420", 00:08:04.414 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:04.414 }, 00:08:04.414 "ctrlr_data": { 00:08:04.414 "cntlid": 1, 00:08:04.414 "vendor_id": "0x8086", 00:08:04.414 "model_number": "SPDK bdev Controller", 00:08:04.414 "serial_number": "SPDK0", 00:08:04.414 "firmware_revision": "25.01", 00:08:04.414 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:04.414 "oacs": { 00:08:04.414 "security": 0, 00:08:04.414 "format": 0, 00:08:04.414 "firmware": 0, 00:08:04.414 "ns_manage": 0 00:08:04.414 }, 00:08:04.414 "multi_ctrlr": true, 00:08:04.414 
"ana_reporting": false 00:08:04.414 }, 00:08:04.414 "vs": { 00:08:04.414 "nvme_version": "1.3" 00:08:04.414 }, 00:08:04.414 "ns_data": { 00:08:04.414 "id": 1, 00:08:04.414 "can_share": true 00:08:04.414 } 00:08:04.414 } 00:08:04.414 ], 00:08:04.414 "mp_policy": "active_passive" 00:08:04.414 } 00:08:04.414 } 00:08:04.414 ] 00:08:04.414 13:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=738562 00:08:04.414 13:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:04.414 13:12:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:04.414 Running I/O for 10 seconds... 00:08:05.367 Latency(us) 00:08:05.367 [2024-12-05T12:12:27.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.367 Nvme0n1 : 1.00 17978.00 70.23 0.00 0.00 0.00 0.00 0.00 00:08:05.367 [2024-12-05T12:12:27.935Z] =================================================================================================================== 00:08:05.367 [2024-12-05T12:12:27.935Z] Total : 17978.00 70.23 0.00 0.00 0.00 0.00 0.00 00:08:05.367 00:08:06.307 13:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9df07d2a-3040-45fb-a67b-6345fc352ef8 00:08:06.567 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.567 Nvme0n1 : 2.00 18103.00 70.71 0.00 0.00 0.00 0.00 0.00 00:08:06.567 [2024-12-05T12:12:29.135Z] =================================================================================================================== 00:08:06.567 [2024-12-05T12:12:29.135Z] Total : 18103.00 70.71 0.00 0.00 0.00 0.00 0.00 00:08:06.567 00:08:06.567 true 00:08:06.567 13:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9df07d2a-3040-45fb-a67b-6345fc352ef8 00:08:06.567 13:12:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:06.828 13:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:06.828 13:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:06.828 13:12:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 738562 00:08:07.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.398 Nvme0n1 : 3.00 18140.67 70.86 0.00 0.00 0.00 0.00 0.00 00:08:07.398 [2024-12-05T12:12:29.966Z] =================================================================================================================== 00:08:07.398 [2024-12-05T12:12:29.966Z] Total : 18140.67 70.86 0.00 0.00 0.00 0.00 0.00 00:08:07.398 00:08:08.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.339 Nvme0n1 : 4.00 18158.75 70.93 0.00 0.00 0.00 0.00 0.00 00:08:08.339 [2024-12-05T12:12:30.907Z] 
=================================================================================================================== 00:08:08.339 [2024-12-05T12:12:30.907Z] Total : 18158.75 70.93 0.00 0.00 0.00 0.00 0.00 00:08:08.339 00:08:09.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.721 Nvme0n1 : 5.00 18173.20 70.99 0.00 0.00 0.00 0.00 0.00 00:08:09.721 [2024-12-05T12:12:32.289Z] =================================================================================================================== 00:08:09.721 [2024-12-05T12:12:32.289Z] Total : 18173.20 70.99 0.00 0.00 0.00 0.00 0.00 00:08:09.721 00:08:10.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:10.663 Nvme0n1 : 6.00 18201.50 71.10 0.00 0.00 0.00 0.00 0.00 00:08:10.663 [2024-12-05T12:12:33.231Z] =================================================================================================================== 00:08:10.663 [2024-12-05T12:12:33.231Z] Total : 18201.50 71.10 0.00 0.00 0.00 0.00 0.00 00:08:10.663 00:08:11.603 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:11.603 Nvme0n1 : 7.00 18216.43 71.16 0.00 0.00 0.00 0.00 0.00 00:08:11.603 [2024-12-05T12:12:34.171Z] =================================================================================================================== 00:08:11.603 [2024-12-05T12:12:34.171Z] Total : 18216.43 71.16 0.00 0.00 0.00 0.00 0.00 00:08:11.603 00:08:12.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.544 Nvme0n1 : 8.00 18237.75 71.24 0.00 0.00 0.00 0.00 0.00 00:08:12.544 [2024-12-05T12:12:35.112Z] =================================================================================================================== 00:08:12.544 [2024-12-05T12:12:35.112Z] Total : 18237.75 71.24 0.00 0.00 0.00 0.00 0.00 00:08:12.544 00:08:13.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.484 Nvme0n1 : 9.00 18253.67 71.30 0.00 0.00 0.00 0.00 0.00 00:08:13.484 [2024-12-05T12:12:36.052Z] =================================================================================================================== 00:08:13.484 [2024-12-05T12:12:36.052Z] Total : 18253.67 71.30 0.00 0.00 0.00 0.00 0.00 00:08:13.484 00:08:14.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.424 Nvme0n1 : 10.00 18262.80 71.34 0.00 0.00 0.00 0.00 0.00 00:08:14.424 [2024-12-05T12:12:36.992Z] =================================================================================================================== 00:08:14.424 [2024-12-05T12:12:36.992Z] Total : 18262.80 71.34 0.00 0.00 0.00 0.00 0.00 00:08:14.424 00:08:14.424 00:08:14.424 Latency(us) 00:08:14.424 [2024-12-05T12:12:36.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.424 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.424 Nvme0n1 : 10.00 18260.46 71.33 0.00 0.00 7006.58 1727.15 12670.29 00:08:14.424 [2024-12-05T12:12:36.992Z] =================================================================================================================== 00:08:14.424 [2024-12-05T12:12:36.992Z] Total : 18260.46 71.33 0.00 0.00 7006.58 1727.15 12670.29 00:08:14.424 { 00:08:14.424 "results": [ 00:08:14.424 { 00:08:14.424 "job": "Nvme0n1", 00:08:14.424 "core_mask": "0x2", 00:08:14.424 "workload": "randwrite", 00:08:14.424 "status": "finished", 00:08:14.424 "queue_depth": 128, 00:08:14.424 "io_size": 4096, 00:08:14.424 
"runtime": 10.004842, 00:08:14.424 "iops": 18260.45828609787, 00:08:14.424 "mibps": 71.3299151800698, 00:08:14.424 "io_failed": 0, 00:08:14.424 "io_timeout": 0, 00:08:14.424 "avg_latency_us": 7006.583821091484, 00:08:14.424 "min_latency_us": 1727.1466666666668, 00:08:14.424 "max_latency_us": 12670.293333333333 00:08:14.424 } 00:08:14.424 ], 00:08:14.424 "core_count": 1 00:08:14.424 } 00:08:14.424 13:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 738264 00:08:14.424 13:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 738264 ']' 00:08:14.424 13:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 738264 00:08:14.424 13:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:14.424 13:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.424 13:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 738264 00:08:14.685 13:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:14.685 13:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:14.685 13:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 738264' 00:08:14.685 killing process with pid 738264 00:08:14.685 13:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 738264 00:08:14.685 Received shutdown signal, test time was about 10.000000 seconds 00:08:14.685 00:08:14.685 Latency(us) 00:08:14.685 [2024-12-05T12:12:37.253Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.685 [2024-12-05T12:12:37.253Z] =================================================================================================================== 00:08:14.685 [2024-12-05T12:12:37.253Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:14.685 13:12:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 738264 00:08:14.685 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:14.965 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:14.965 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9df07d2a-3040-45fb-a67b-6345fc352ef8 00:08:14.965 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:15.371 13:12:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 734007 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 734007 00:08:15.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 734007 Killed "${NVMF_APP[@]}" "$@" 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=740878 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 740878 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 740878 ']' 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.371 13:12:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:15.371 [2024-12-05 13:12:37.794018] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:08:15.371 [2024-12-05 13:12:37.794077] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.371 [2024-12-05 13:12:37.882320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.693 [2024-12-05 13:12:37.917961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.693 [2024-12-05 13:12:37.917996] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.693 [2024-12-05 13:12:37.918005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.693 [2024-12-05 13:12:37.918011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:15.693 [2024-12-05 13:12:37.918017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:15.693 [2024-12-05 13:12:37.918603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.264 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.264 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:16.264 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:16.264 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:16.264 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:16.264 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.264 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:16.264 [2024-12-05 13:12:38.794351] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:16.264 [2024-12-05 13:12:38.794442] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:16.264 [2024-12-05 13:12:38.794471] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:16.264 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:16.264 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8fa78c7a-04c8-4067-8e0d-f8c352c67e6a 00:08:16.264 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8fa78c7a-04c8-4067-8e0d-f8c352c67e6a 00:08:16.264 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.264 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:16.264 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.264 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.264 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:16.525 13:12:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8fa78c7a-04c8-4067-8e0d-f8c352c67e6a -t 2000 00:08:16.785 [ 00:08:16.785 { 00:08:16.785 "name": "8fa78c7a-04c8-4067-8e0d-f8c352c67e6a", 00:08:16.785 "aliases": [ 00:08:16.785 "lvs/lvol" 00:08:16.785 ], 00:08:16.785 "product_name": "Logical Volume", 00:08:16.785 "block_size": 4096, 00:08:16.785 "num_blocks": 38912, 00:08:16.785 "uuid": "8fa78c7a-04c8-4067-8e0d-f8c352c67e6a", 00:08:16.785 "assigned_rate_limits": { 00:08:16.785 "rw_ios_per_sec": 0, 00:08:16.785 "rw_mbytes_per_sec": 0, 
00:08:16.785 "r_mbytes_per_sec": 0, 00:08:16.785 "w_mbytes_per_sec": 0 00:08:16.785 }, 00:08:16.785 "claimed": false, 00:08:16.785 "zoned": false, 00:08:16.785 "supported_io_types": { 00:08:16.785 "read": true, 00:08:16.785 "write": true, 00:08:16.785 "unmap": true, 00:08:16.785 "flush": false, 00:08:16.785 "reset": true, 00:08:16.785 "nvme_admin": false, 00:08:16.785 "nvme_io": false, 00:08:16.785 "nvme_io_md": false, 00:08:16.785 "write_zeroes": true, 00:08:16.785 "zcopy": false, 00:08:16.785 "get_zone_info": false, 00:08:16.785 "zone_management": false, 00:08:16.785 "zone_append": false, 00:08:16.785 "compare": false, 00:08:16.785 "compare_and_write": false, 00:08:16.785 "abort": false, 00:08:16.785 "seek_hole": true, 00:08:16.785 "seek_data": true, 00:08:16.785 "copy": false, 00:08:16.785 "nvme_iov_md": false 00:08:16.785 }, 00:08:16.785 "driver_specific": { 00:08:16.785 "lvol": { 00:08:16.785 "lvol_store_uuid": "9df07d2a-3040-45fb-a67b-6345fc352ef8", 00:08:16.785 "base_bdev": "aio_bdev", 00:08:16.785 "thin_provision": false, 00:08:16.785 "num_allocated_clusters": 38, 00:08:16.785 "snapshot": false, 00:08:16.785 "clone": false, 00:08:16.785 "esnap_clone": false 00:08:16.785 } 00:08:16.785 } 00:08:16.785 } 00:08:16.785 ] 00:08:16.785 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:16.785 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9df07d2a-3040-45fb-a67b-6345fc352ef8 00:08:16.785 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:16.785 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:16.785 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9df07d2a-3040-45fb-a67b-6345fc352ef8 00:08:16.785 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:17.046 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:17.046 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:17.307 [2024-12-05 13:12:39.650507] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9df07d2a-3040-45fb-a67b-6345fc352ef8 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9df07d2a-3040-45fb-a67b-6345fc352ef8 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9df07d2a-3040-45fb-a67b-6345fc352ef8 00:08:17.307 request: 00:08:17.307 { 00:08:17.307 "uuid": "9df07d2a-3040-45fb-a67b-6345fc352ef8", 00:08:17.307 "method": "bdev_lvol_get_lvstores", 00:08:17.307 "req_id": 1 00:08:17.307 } 00:08:17.307 Got JSON-RPC error response 00:08:17.307 response: 00:08:17.307 { 00:08:17.307 "code": -19, 00:08:17.307 "message": "No such device" 00:08:17.307 } 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:17.307 13:12:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:17.567 aio_bdev 00:08:17.567 13:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8fa78c7a-04c8-4067-8e0d-f8c352c67e6a 00:08:17.567 13:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8fa78c7a-04c8-4067-8e0d-f8c352c67e6a 00:08:17.567 13:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.567 13:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:17.567 13:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.567 13:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.567 13:12:40 
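After re-creating the aio bdev, the harness cannot use the lvol immediately; waitforbdev (entered just above, with its default 2000 ms timeout) first waits for bdev examination to settle and then polls for the bdev, which is what the two RPCs on the next lines do. Reduced to its observable effect (helper internals are assumed; the UUID and timeout are from the trace):

  scripts/rpc.py bdev_wait_for_examine   # block until bdev examination settles
  scripts/rpc.py bdev_get_bdevs -b 8fa78c7a-04c8-4067-8e0d-f8c352c67e6a -t 2000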
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:17.829 13:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8fa78c7a-04c8-4067-8e0d-f8c352c67e6a -t 2000 00:08:17.829 [ 00:08:17.829 { 00:08:17.829 "name": "8fa78c7a-04c8-4067-8e0d-f8c352c67e6a", 00:08:17.829 "aliases": [ 00:08:17.829 "lvs/lvol" 00:08:17.829 ], 00:08:17.829 "product_name": "Logical Volume", 00:08:17.829 "block_size": 4096, 00:08:17.829 "num_blocks": 38912, 00:08:17.829 "uuid": "8fa78c7a-04c8-4067-8e0d-f8c352c67e6a", 00:08:17.829 "assigned_rate_limits": { 00:08:17.829 "rw_ios_per_sec": 0, 00:08:17.829 "rw_mbytes_per_sec": 0, 00:08:17.829 "r_mbytes_per_sec": 0, 00:08:17.829 "w_mbytes_per_sec": 0 00:08:17.829 }, 00:08:17.829 "claimed": false, 00:08:17.829 "zoned": false, 00:08:17.829 "supported_io_types": { 00:08:17.829 "read": true, 00:08:17.829 "write": true, 00:08:17.829 "unmap": true, 00:08:17.829 "flush": false, 00:08:17.829 "reset": true, 00:08:17.829 "nvme_admin": false, 00:08:17.829 "nvme_io": false, 00:08:17.829 "nvme_io_md": false, 00:08:17.829 "write_zeroes": true, 00:08:17.829 "zcopy": false, 00:08:17.829 "get_zone_info": false, 00:08:17.829 "zone_management": false, 00:08:17.829 "zone_append": false, 00:08:17.829 "compare": false, 00:08:17.829 "compare_and_write": false, 00:08:17.829 "abort": false, 00:08:17.829 "seek_hole": true, 00:08:17.829 "seek_data": true, 00:08:17.829 "copy": false, 00:08:17.829 "nvme_iov_md": false 00:08:17.829 }, 00:08:17.829 "driver_specific": { 00:08:17.829 "lvol": { 00:08:17.829 "lvol_store_uuid": "9df07d2a-3040-45fb-a67b-6345fc352ef8", 00:08:17.829 "base_bdev": "aio_bdev", 00:08:17.829 "thin_provision": false, 00:08:17.829 "num_allocated_clusters": 38, 00:08:17.829 "snapshot": false, 00:08:17.829 "clone": false, 00:08:17.829 "esnap_clone": false 00:08:17.829 } 00:08:17.829 } 00:08:17.829 } 00:08:17.829 ] 00:08:17.829 13:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:17.829 13:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9df07d2a-3040-45fb-a67b-6345fc352ef8 00:08:17.829 13:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:18.091 13:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:18.091 13:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9df07d2a-3040-45fb-a67b-6345fc352ef8 00:08:18.091 13:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:18.352 13:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:18.352 13:12:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8fa78c7a-04c8-4067-8e0d-f8c352c67e6a 00:08:18.352 13:12:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9df07d2a-3040-45fb-a67b-6345fc352ef8 00:08:18.613 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:18.873 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:18.873 00:08:18.873 real 0m17.653s 00:08:18.873 user 0m46.004s 00:08:18.873 sys 0m2.902s 00:08:18.873 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.873 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:18.873 ************************************ 00:08:18.873 END TEST lvs_grow_dirty 00:08:18.873 ************************************ 00:08:18.873 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:18.873 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:18.873 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:18.873 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:18.873 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:18.873 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:18.873 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:18.873 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:18.873 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:18.873 nvmf_trace.0 00:08:18.873 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:18.873 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:18.873 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:18.873 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:18.874 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:18.874 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:18.874 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:18.874 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:18.874 rmmod nvme_tcp 00:08:18.874 rmmod nvme_fabrics 00:08:18.874 rmmod nvme_keyring 00:08:18.874 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:18.874 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:18.874 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:18.874 
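Teardown above follows the pattern every nvmf test in this log uses: the shared-memory trace file left by the target (nvmf_trace.0) is archived for offline analysis, then the kernel NVMe/TCP fabrics modules are unloaded before the target process itself is reaped below. Condensed from the trace, with the Jenkins output path shortened:

  tar -C /dev/shm/ -cvzf output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
  sync
  modprobe -v -r nvme-tcp       # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics   # run under set +e with retries in the harness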
13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 740878 ']' 00:08:18.874 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 740878 00:08:18.874 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 740878 ']' 00:08:18.874 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 740878 00:08:18.874 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:19.134 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.134 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 740878 00:08:19.134 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.135 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.135 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 740878' 00:08:19.135 killing process with pid 740878 00:08:19.135 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 740878 00:08:19.135 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 740878 00:08:19.135 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:19.135 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:19.135 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:19.135 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:19.135 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:19.135 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:19.135 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:19.135 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:19.135 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:19.135 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.135 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.135 13:12:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:21.677 00:08:21.677 real 0m45.662s 00:08:21.677 user 1m8.231s 00:08:21.677 sys 0m10.927s 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.677 ************************************ 00:08:21.677 END TEST nvmf_lvs_grow 00:08:21.677 ************************************ 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:21.677 ************************************ 00:08:21.677 START TEST nvmf_bdev_io_wait 00:08:21.677 ************************************ 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:21.677 * Looking for test storage... 00:08:21.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:21.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.677 --rc genhtml_branch_coverage=1 00:08:21.677 --rc genhtml_function_coverage=1 00:08:21.677 --rc genhtml_legend=1 00:08:21.677 --rc geninfo_all_blocks=1 00:08:21.677 --rc geninfo_unexecuted_blocks=1 00:08:21.677 00:08:21.677 ' 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:21.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.677 --rc genhtml_branch_coverage=1 00:08:21.677 --rc genhtml_function_coverage=1 00:08:21.677 --rc genhtml_legend=1 00:08:21.677 --rc geninfo_all_blocks=1 00:08:21.677 --rc geninfo_unexecuted_blocks=1 00:08:21.677 00:08:21.677 ' 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:21.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.677 --rc genhtml_branch_coverage=1 00:08:21.677 --rc genhtml_function_coverage=1 00:08:21.677 --rc genhtml_legend=1 00:08:21.677 --rc geninfo_all_blocks=1 00:08:21.677 --rc geninfo_unexecuted_blocks=1 00:08:21.677 00:08:21.677 ' 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:21.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:21.677 --rc genhtml_branch_coverage=1 00:08:21.677 --rc genhtml_function_coverage=1 00:08:21.677 --rc genhtml_legend=1 00:08:21.677 --rc geninfo_all_blocks=1 00:08:21.677 --rc geninfo_unexecuted_blocks=1 00:08:21.677 00:08:21.677 ' 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:21.677 13:12:43 
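The long run of scripts/common.sh lines above is cmp_versions concluding that the installed lcov (1.15) predates 2.x, which is what selects the --rc lcov_branch_coverage/--rc lcov_function_coverage option set exported next. A rough standalone rendering of that split-and-compare logic (version_lt is a name made up here, not the harness helper):

    # succeed when $1 sorts before $2, splitting fields on '.', '-' and ':'
    version_lt() {
        local IFS='.-:' i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "pre-2.x lcov options selected"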
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:21.677 13:12:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:21.677 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:21.677 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:21.677 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:21.677 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:21.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
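The `[: : integer expression expected` complaint a few lines up is noise rather than a failure: nvmf/common.sh line 33 ran a numeric test of the form '[' '' -eq 1 ']' with its variable unset, test's -eq requires an integer, so the check evaluated false and build_nvmf_app_args carried on. A two-line reproduction, with the guarded spelling that avoids the message (flag is an illustrative name; the real variable is not shown in the trace):

    flag=''
    [ "$flag" -eq 1 ]          # prints: [: : integer expression expected (status 2)
    [ "${flag:-0}" -eq 1 ]     # defaulting keeps the test numeric and quietly false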
MALLOC_BLOCK_SIZE=512 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:21.678 13:12:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:29.815 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.815 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:29.816 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.816 13:12:51 
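The e810/x722/mlx arrays being filled above are the allow-list of NIC device IDs the harness will test on; each PCI function that matches is then mapped to its kernel interface through sysfs, which is where the cvl_0_0/cvl_0_1 names below come from. Reduced to the single ID that matched on this rig:

    # 0x8086:0x159b (Intel E810) was reported for both ports above
    for pci in 0000:31:00.0 0000:31:00.1; do
        for net in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$net" ] && echo "Found net device under $pci: ${net##*/}"
        done
    done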
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:29.816 Found net devices under 0000:31:00.0: cvl_0_0 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:29.816 Found net devices under 0000:31:00.1: cvl_0_1 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:29.816 13:12:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:29.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.609 ms 00:08:29.816 00:08:29.816 --- 10.0.0.2 ping statistics --- 00:08:29.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.816 rtt min/avg/max/mdev = 0.609/0.609/0.609/0.000 ms 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:29.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:29.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:08:29.816 00:08:29.816 --- 10.0.0.1 ping statistics --- 00:08:29.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.816 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=746354 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 746354 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 746354 ']' 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.816 13:12:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:29.816 [2024-12-05 13:12:52.373007] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
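Summing up the bring-up above: the two ports are presumably cabled back-to-back on this rig, so moving the target port into its own network namespace makes 10.0.0.1 <-> 10.0.0.2 traffic cross a real link instead of the loopback path. The same topology by hand, with the names from this run (the iptables comment tag is shortened here; the harness uses a longer SPDK_NVMF-prefixed string):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start clean
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

Tagging the ACCEPT rule matters: teardown later removes only SPDK's rules by piping iptables-save through grep -v SPDK_NVMF into iptables-restore, which is exactly the iptr trace seen at the end of each test.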
00:08:29.816 [2024-12-05 13:12:52.373058] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.077 [2024-12-05 13:12:52.459070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.077 [2024-12-05 13:12:52.496110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.077 [2024-12-05 13:12:52.496144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.077 [2024-12-05 13:12:52.496151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.077 [2024-12-05 13:12:52.496158] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.077 [2024-12-05 13:12:52.496163] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.077 [2024-12-05 13:12:52.497706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.077 [2024-12-05 13:12:52.497822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.077 [2024-12-05 13:12:52.497978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.077 [2024-12-05 13:12:52.497978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.646 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.646 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:30.646 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:30.646 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:30.646 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.646 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.646 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:30.646 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.646 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:30.907 [2024-12-05 13:12:53.274482] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.907 Malloc0 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:30.907 [2024-12-05 13:12:53.333830] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=746664 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=746667 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:30.907 { 00:08:30.907 "params": { 
00:08:30.907 "name": "Nvme$subsystem", 00:08:30.907 "trtype": "$TEST_TRANSPORT", 00:08:30.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.907 "adrfam": "ipv4", 00:08:30.907 "trsvcid": "$NVMF_PORT", 00:08:30.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.907 "hdgst": ${hdgst:-false}, 00:08:30.907 "ddgst": ${ddgst:-false} 00:08:30.907 }, 00:08:30.907 "method": "bdev_nvme_attach_controller" 00:08:30.907 } 00:08:30.907 EOF 00:08:30.907 )") 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=746670 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=746673 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:30.907 { 00:08:30.907 "params": { 00:08:30.907 "name": "Nvme$subsystem", 00:08:30.907 "trtype": "$TEST_TRANSPORT", 00:08:30.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.907 "adrfam": "ipv4", 00:08:30.907 "trsvcid": "$NVMF_PORT", 00:08:30.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.907 "hdgst": ${hdgst:-false}, 00:08:30.907 "ddgst": ${ddgst:-false} 00:08:30.907 }, 00:08:30.907 "method": "bdev_nvme_attach_controller" 00:08:30.907 } 00:08:30.907 EOF 00:08:30.907 )") 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:30.907 { 00:08:30.907 "params": { 00:08:30.907 "name": "Nvme$subsystem", 00:08:30.907 "trtype": "$TEST_TRANSPORT", 00:08:30.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.907 "adrfam": "ipv4", 00:08:30.907 "trsvcid": "$NVMF_PORT", 00:08:30.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.907 "hdgst": ${hdgst:-false}, 
00:08:30.907 "ddgst": ${ddgst:-false} 00:08:30.907 }, 00:08:30.907 "method": "bdev_nvme_attach_controller" 00:08:30.907 } 00:08:30.907 EOF 00:08:30.907 )") 00:08:30.907 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:30.908 { 00:08:30.908 "params": { 00:08:30.908 "name": "Nvme$subsystem", 00:08:30.908 "trtype": "$TEST_TRANSPORT", 00:08:30.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:30.908 "adrfam": "ipv4", 00:08:30.908 "trsvcid": "$NVMF_PORT", 00:08:30.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:30.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:30.908 "hdgst": ${hdgst:-false}, 00:08:30.908 "ddgst": ${ddgst:-false} 00:08:30.908 }, 00:08:30.908 "method": "bdev_nvme_attach_controller" 00:08:30.908 } 00:08:30.908 EOF 00:08:30.908 )") 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 746664 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:30.908 "params": { 00:08:30.908 "name": "Nvme1", 00:08:30.908 "trtype": "tcp", 00:08:30.908 "traddr": "10.0.0.2", 00:08:30.908 "adrfam": "ipv4", 00:08:30.908 "trsvcid": "4420", 00:08:30.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.908 "hdgst": false, 00:08:30.908 "ddgst": false 00:08:30.908 }, 00:08:30.908 "method": "bdev_nvme_attach_controller" 00:08:30.908 }' 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
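Every rpc_cmd traced since nvmf_tgt came up with --wait-for-rpc adds up to a seven-call provisioning sequence. Replayed directly with scripts/rpc.py against the default /var/tmp/spdk.sock (the harness issues the same calls through its rpc_cmd wrapper):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_set_options -p 5 -c 1        # only accepted before framework init
    $rpc framework_start_init              # releases the --wait-for-rpc hold
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420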
00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:30.908 "params": { 00:08:30.908 "name": "Nvme1", 00:08:30.908 "trtype": "tcp", 00:08:30.908 "traddr": "10.0.0.2", 00:08:30.908 "adrfam": "ipv4", 00:08:30.908 "trsvcid": "4420", 00:08:30.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.908 "hdgst": false, 00:08:30.908 "ddgst": false 00:08:30.908 }, 00:08:30.908 "method": "bdev_nvme_attach_controller" 00:08:30.908 }' 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:30.908 "params": { 00:08:30.908 "name": "Nvme1", 00:08:30.908 "trtype": "tcp", 00:08:30.908 "traddr": "10.0.0.2", 00:08:30.908 "adrfam": "ipv4", 00:08:30.908 "trsvcid": "4420", 00:08:30.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.908 "hdgst": false, 00:08:30.908 "ddgst": false 00:08:30.908 }, 00:08:30.908 "method": "bdev_nvme_attach_controller" 00:08:30.908 }' 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:30.908 13:12:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:30.908 "params": { 00:08:30.908 "name": "Nvme1", 00:08:30.908 "trtype": "tcp", 00:08:30.908 "traddr": "10.0.0.2", 00:08:30.908 "adrfam": "ipv4", 00:08:30.908 "trsvcid": "4420", 00:08:30.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:30.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:30.908 "hdgst": false, 00:08:30.908 "ddgst": false 00:08:30.908 }, 00:08:30.908 "method": "bdev_nvme_attach_controller" 00:08:30.908 }' 00:08:30.908 [2024-12-05 13:12:53.390703] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:08:30.908 [2024-12-05 13:12:53.390756] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:30.908 [2024-12-05 13:12:53.391506] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:08:30.908 [2024-12-05 13:12:53.391557] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:30.908 [2024-12-05 13:12:53.391859] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:08:30.908 [2024-12-05 13:12:53.391912] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:30.908 [2024-12-05 13:12:53.392748] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
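Each of the four bdevperf instances above takes its controller definition from a process substitution (the --json /dev/fd/63 in the command lines), so the JSON printed by the printf calls never touches disk; distinct core masks (0x10/0x20/0x40/0x80) and shm ids (-i 1..4) let all four run concurrently against the one subsystem. A reduced two-instance sketch; gen_json is a stand-in name, and the outer subsystems/config wrapper follows SPDK's standard JSON config layout rather than anything shown verbatim in this trace:

    bp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    gen_json() {
        echo '{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_nvme_attach_controller","params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode1","hostnqn":"nqn.2016-06.io.spdk:host1","hdgst":false,"ddgst":false}}]}]}'
    }
    $bp -m 0x10 -i 1 --json <(gen_json) -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    $bp -m 0x20 -i 2 --json <(gen_json) -q 128 -o 4096 -w read -t 1 -s 256 &
    READ_PID=$!
    wait $WRITE_PID $READ_PID    # mirrors the wait calls in the results below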
00:08:30.908 [2024-12-05 13:12:53.392794] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:08:31.168 [2024-12-05 13:12:53.565920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:31.168 [2024-12-05 13:12:53.593926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:08:31.168 [2024-12-05 13:12:53.626334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:31.168 [2024-12-05 13:12:53.656064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:08:31.168 [2024-12-05 13:12:53.672050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:31.168 [2024-12-05 13:12:53.700421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:08:31.168 [2024-12-05 13:12:53.721227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:31.428 [2024-12-05 13:12:53.749551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:08:31.428 Running I/O for 1 seconds...
00:08:31.428 Running I/O for 1 seconds...
00:08:31.428 Running I/O for 1 seconds...
00:08:31.687 Running I/O for 1 seconds...
00:08:32.257 181448.00 IOPS, 708.78 MiB/s
00:08:32.257 Latency(us)
00:08:32.257 [2024-12-05T12:12:54.825Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:32.257 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:08:32.257 Nvme1n1 : 1.00 181088.21 707.38 0.00 0.00 702.70 302.08 1966.08
00:08:32.257 [2024-12-05T12:12:54.825Z] ===================================================================================================================
00:08:32.257 [2024-12-05T12:12:54.825Z] Total : 181088.21 707.38 0.00 0.00 702.70 302.08 1966.08
00:08:32.518 8435.00 IOPS, 32.95 MiB/s
00:08:32.518 Latency(us)
00:08:32.518 [2024-12-05T12:12:55.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:32.518 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:08:32.518 Nvme1n1 : 1.02 8464.53 33.06 0.00 0.00 14990.88 6635.52 24903.68
00:08:32.518 [2024-12-05T12:12:55.086Z] ===================================================================================================================
00:08:32.518 [2024-12-05T12:12:55.086Z] Total : 8464.53 33.06 0.00 0.00 14990.88 6635.52 24903.68
00:08:32.518 12786.00 IOPS, 49.95 MiB/s
00:08:32.518 Latency(us)
00:08:32.518 [2024-12-05T12:12:55.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:32.518 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:08:32.518 Nvme1n1 : 1.01 12824.34 50.10 0.00 0.00 9943.29 5379.41 20971.52
00:08:32.518 [2024-12-05T12:12:55.086Z] ===================================================================================================================
00:08:32.518 [2024-12-05T12:12:55.086Z] Total : 12824.34 50.10 0.00 0.00 9943.29 5379.41 20971.52
00:08:32.518 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 746667
00:08:32.518 8520.00 IOPS, 33.28 MiB/s
00:08:32.518 Latency(us)
00:08:32.518 [2024-12-05T12:12:55.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:32.518 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:08:32.518 Nvme1n1 : 1.01 8641.87 33.76 0.00 0.00 14777.42 3659.09 38229.33
00:08:32.518 [2024-12-05T12:12:55.086Z] ===================================================================================================================
00:08:32.518 [2024-12-05T12:12:55.086Z] Total : 8641.87 33.76 0.00 0.00 14777.42 3659.09 38229.33
00:08:32.518 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 746670
00:08:32.518 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 746673
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:32.779 rmmod nvme_tcp
00:08:32.779 rmmod nvme_fabrics
00:08:32.779 rmmod nvme_keyring
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 746354 ']'
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 746354
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 746354 ']'
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 746354
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 746354
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 746354'
00:08:32.779 killing process with pid 746354
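The killprocess running here checks more than a bare kill would: it probes the pid with kill -0, reads the process's comm name so a recycled pid that now belongs to something else (or to sudo itself) is never signalled blindly, and only then kills and reaps (the kill/wait pair follows in the next lines). Approximately, with the sudo branch simplified to a refusal:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 1       # still alive?
        local name
        name=$(ps --no-headers -o comm= "$pid")      # reactor_0 in this run
        [ "$name" = sudo ] && return 1               # harness handles sudo pids separately
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reaping works because nvmf_tgt is the harness's own child
    }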
00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 746354 00:08:32.779 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 746354 00:08:33.040 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:33.040 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:33.040 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:33.040 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:33.040 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:33.040 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:33.040 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:33.040 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:33.040 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:33.040 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:33.040 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:33.040 13:12:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.957 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:34.957 00:08:34.957 real 0m13.694s 00:08:34.957 user 0m19.194s 00:08:34.957 sys 0m7.752s 00:08:34.957 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.957 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:34.957 ************************************ 00:08:34.957 END TEST nvmf_bdev_io_wait 00:08:34.957 ************************************ 00:08:34.957 13:12:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:34.957 13:12:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:34.957 13:12:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.957 13:12:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:35.220 ************************************ 00:08:35.220 START TEST nvmf_queue_depth 00:08:35.220 ************************************ 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:35.220 * Looking for test storage... 
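
The nvmftestfini teardown traced above reduces to: unload the initiator modules, kill the target process, drop only SPDK's tagged iptables rules, and flush the test interface. A condensed sketch using this run's pid and names; the netns deletion is an assumption about what _remove_spdk_ns does internally:

# Sketch of the cleanup traced above (746354 is this run's nvmf_tgt pid).
modprobe -v -r nvme-tcp nvme-fabrics                  # initiator modules first
kill 746354 && wait 746354                            # wait works here because nvmf_tgt is a child of the test shell
iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only the comment-tagged SPDK rules
ip netns delete cvl_0_0_ns_spdk                       # assumed _remove_spdk_ns equivalent
ip -4 addr flush cvl_0_1
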
00:08:35.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:35.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.220 --rc genhtml_branch_coverage=1 00:08:35.220 --rc genhtml_function_coverage=1 00:08:35.220 --rc genhtml_legend=1 00:08:35.220 --rc geninfo_all_blocks=1 00:08:35.220 --rc geninfo_unexecuted_blocks=1 00:08:35.220 00:08:35.220 ' 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:35.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.220 --rc genhtml_branch_coverage=1 00:08:35.220 --rc genhtml_function_coverage=1 00:08:35.220 --rc genhtml_legend=1 00:08:35.220 --rc geninfo_all_blocks=1 00:08:35.220 --rc geninfo_unexecuted_blocks=1 00:08:35.220 00:08:35.220 ' 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:35.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.220 --rc genhtml_branch_coverage=1 00:08:35.220 --rc genhtml_function_coverage=1 00:08:35.220 --rc genhtml_legend=1 00:08:35.220 --rc geninfo_all_blocks=1 00:08:35.220 --rc geninfo_unexecuted_blocks=1 00:08:35.220 00:08:35.220 ' 00:08:35.220 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:35.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:35.220 --rc genhtml_branch_coverage=1 00:08:35.220 --rc genhtml_function_coverage=1 00:08:35.221 --rc genhtml_legend=1 00:08:35.221 --rc geninfo_all_blocks=1 00:08:35.221 --rc geninfo_unexecuted_blocks=1 00:08:35.221 00:08:35.221 ' 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:35.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:35.221 13:12:57 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:43.363 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:43.364 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:43.364 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:43.364 Found net devices under 0000:31:00.0: cvl_0_0 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:43.364 Found net devices under 0000:31:00.1: cvl_0_1 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:43.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:43.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:08:43.364 00:08:43.364 --- 10.0.0.2 ping statistics --- 00:08:43.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.364 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:43.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:43.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:08:43.364 00:08:43.364 --- 10.0.0.1 ping statistics --- 00:08:43.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.364 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=751780 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 751780 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 751780 ']' 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.364 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.365 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.365 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.365 13:13:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:43.625 [2024-12-05 13:13:05.980208] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
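
nvmf_tcp_init, traced above, splits the NIC's two ports across network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1); after a ping in each direction, nvmf_tgt is started inside the namespace. Condensed into a standalone sketch, with a crude polling loop standing in for waitforlisten (an assumption, not the helper's real code):

# Sketch of the namespace topology and target launch traced above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# ACCEPT rule tagged with a comment so the teardown can grep it back out later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # crude stand-in for waitforlisten
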
00:08:43.625 [2024-12-05 13:13:05.980272] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.625 [2024-12-05 13:13:06.086976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.625 [2024-12-05 13:13:06.122555] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:43.625 [2024-12-05 13:13:06.122586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:43.625 [2024-12-05 13:13:06.122595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:43.625 [2024-12-05 13:13:06.122602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:43.625 [2024-12-05 13:13:06.122608] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:43.625 [2024-12-05 13:13:06.123212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.567 [2024-12-05 13:13:06.811129] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.567 Malloc0 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.567 13:13:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.567 [2024-12-05 13:13:06.872306] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=751905 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 751905 /var/tmp/bdevperf.sock 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 751905 ']' 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:44.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:44.567 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.568 13:13:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:44.568 [2024-12-05 13:13:06.939110] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
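
Stripped of the xtrace noise, queue_depth.sh's setup above is five target-side RPCs plus a bdevperf instance held idle in -z (wait-for-RPC) mode on its own socket; the values below are taken verbatim from the trace:

# Target side: TCP transport, a 64 MiB / 512 B-block malloc bdev, and a subsystem on port 4420.
# rpc.py defaults to /var/tmp/spdk.sock; the path-based UNIX socket is reachable across netns.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: bdevperf idles on -z until perform_tests; QD 1024, 4 KiB verify, 10 s.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

With -z, no I/O starts until the bdev_nvme_attach_controller and perform_tests calls that follow below, which is what lets the test attach NVMe0 over the same bdevperf.sock before the run begins.
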
00:08:44.568 [2024-12-05 13:13:06.939179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid751905 ]
00:08:44.568 [2024-12-05 13:13:07.022156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:44.568 [2024-12-05 13:13:07.063856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:45.506 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:45.506 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:08:45.506 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:08:45.506 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.506 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:08:45.506 NVMe0n1
00:08:45.506 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.506 13:13:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:08:45.506 Running I/O for 10 seconds...
00:08:47.462 8656.00 IOPS, 33.81 MiB/s
[2024-12-05T12:13:11.415Z] 9186.50 IOPS, 35.88 MiB/s
[2024-12-05T12:13:12.357Z] 9532.33 IOPS, 37.24 MiB/s
[2024-12-05T12:13:13.300Z] 10102.25 IOPS, 39.46 MiB/s
[2024-12-05T12:13:14.243Z] 10442.20 IOPS, 40.79 MiB/s
[2024-12-05T12:13:15.188Z] 10704.67 IOPS, 41.82 MiB/s
[2024-12-05T12:13:16.160Z] 10827.43 IOPS, 42.29 MiB/s
[2024-12-05T12:13:17.100Z] 10951.50 IOPS, 42.78 MiB/s
[2024-12-05T12:13:18.485Z] 11035.00 IOPS, 43.11 MiB/s
[2024-12-05T12:13:18.485Z] 11075.60 IOPS, 43.26 MiB/s
00:08:55.917 Latency(us)
00:08:55.917 [2024-12-05T12:13:18.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:55.917 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:08:55.917 Verification LBA range: start 0x0 length 0x4000
00:08:55.917 NVMe0n1 : 10.05 11120.62 43.44 0.00 0.00 91724.92 5570.56 77332.48
00:08:55.917 [2024-12-05T12:13:18.485Z] ===================================================================================================================
00:08:55.917 [2024-12-05T12:13:18.485Z] Total : 11120.62 43.44 0.00 0.00 91724.92 5570.56 77332.48
00:08:55.917 {
00:08:55.917   "results": [
00:08:55.917     {
00:08:55.917       "job": "NVMe0n1",
00:08:55.917       "core_mask": "0x1",
00:08:55.917       "workload": "verify",
00:08:55.917       "status": "finished",
00:08:55.917       "verify_range": {
00:08:55.917         "start": 0,
00:08:55.917         "length": 16384
00:08:55.917       },
00:08:55.917       "queue_depth": 1024,
00:08:55.917       "io_size": 4096,
00:08:55.917       "runtime": 10.051599,
00:08:55.917       "iops": 11120.61871946941,
00:08:55.917       "mibps": 43.439916872927384,
00:08:55.917       "io_failed": 0,
00:08:55.917       "io_timeout": 0,
00:08:55.917       "avg_latency_us": 91724.92416007633,
00:08:55.917       "min_latency_us": 5570.56,
00:08:55.917       "max_latency_us": 77332.48
00:08:55.917     }
00:08:55.917   ],
00:08:55.917   "core_count": 1
00:08:55.917 }
00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth --
target/queue_depth.sh@39 -- # killprocess 751905 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 751905 ']' 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 751905 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 751905 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 751905' 00:08:55.917 killing process with pid 751905 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 751905 00:08:55.917 Received shutdown signal, test time was about 10.000000 seconds 00:08:55.917 00:08:55.917 Latency(us) 00:08:55.917 [2024-12-05T12:13:18.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:55.917 [2024-12-05T12:13:18.485Z] =================================================================================================================== 00:08:55.917 [2024-12-05T12:13:18.485Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 751905 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:55.917 rmmod nvme_tcp 00:08:55.917 rmmod nvme_fabrics 00:08:55.917 rmmod nvme_keyring 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 751780 ']' 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 751780 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 751780 ']' 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 751780 00:08:55.917 
13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 751780 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:55.917 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:55.918 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 751780' 00:08:55.918 killing process with pid 751780 00:08:55.918 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 751780 00:08:55.918 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 751780 00:08:56.178 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:56.178 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:56.178 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:56.178 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:56.178 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:56.178 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:56.178 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:56.178 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:56.178 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:56.178 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.178 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.178 13:13:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.091 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:58.091 00:08:58.091 real 0m23.091s 00:08:58.091 user 0m25.967s 00:08:58.091 sys 0m7.420s 00:08:58.091 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.091 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:58.091 ************************************ 00:08:58.091 END TEST nvmf_queue_depth 00:08:58.091 ************************************ 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.352 
************************************ 00:08:58.352 START TEST nvmf_target_multipath 00:08:58.352 ************************************ 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:58.352 * Looking for test storage... 00:08:58.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:58.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.352 --rc genhtml_branch_coverage=1 00:08:58.352 --rc genhtml_function_coverage=1 00:08:58.352 --rc genhtml_legend=1 00:08:58.352 --rc geninfo_all_blocks=1 00:08:58.352 --rc geninfo_unexecuted_blocks=1 00:08:58.352 00:08:58.352 ' 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:58.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.352 --rc genhtml_branch_coverage=1 00:08:58.352 --rc genhtml_function_coverage=1 00:08:58.352 --rc genhtml_legend=1 00:08:58.352 --rc geninfo_all_blocks=1 00:08:58.352 --rc geninfo_unexecuted_blocks=1 00:08:58.352 00:08:58.352 ' 00:08:58.352 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:58.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.352 --rc genhtml_branch_coverage=1 00:08:58.352 --rc genhtml_function_coverage=1 00:08:58.352 --rc genhtml_legend=1 00:08:58.352 --rc geninfo_all_blocks=1 00:08:58.352 --rc geninfo_unexecuted_blocks=1 00:08:58.352 00:08:58.352 ' 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:58.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.614 --rc genhtml_branch_coverage=1 00:08:58.614 --rc genhtml_function_coverage=1 00:08:58.614 --rc genhtml_legend=1 00:08:58.614 --rc geninfo_all_blocks=1 00:08:58.614 --rc geninfo_unexecuted_blocks=1 00:08:58.614 00:08:58.614 ' 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.614 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:58.615 13:13:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.747 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:06.748 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:06.748 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:06.748 Found net devices under 0000:31:00.0: cvl_0_0 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.748 13:13:29 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:06.748 Found net devices under 0000:31:00.1: cvl_0_1 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:06.748 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:07.008 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:07.008 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:07.008 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:07.008 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:07.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:07.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.699 ms 00:09:07.009 00:09:07.009 --- 10.0.0.2 ping statistics --- 00:09:07.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.009 rtt min/avg/max/mdev = 0.699/0.699/0.699/0.000 ms 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:07.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:07.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:09:07.009 00:09:07.009 --- 10.0.0.1 ping statistics --- 00:09:07.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:07.009 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:07.009 only one NIC for nvmf test 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
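The nvmf_tcp_init trace above builds the whole TCP test topology on one host: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the second (cvl_0_1) stays in the root namespace as the initiator, and a tagged iptables rule opens port 4420 before both directions are verified with ping. A minimal standalone sketch of that sequence, assuming the interface names and 10.0.0.0/24 addressing shown in the trace (the harness embeds the full rule text in the SPDK_NVMF comment; a short tag is used here):

    # target port gets its own namespace so one host can run both ends
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the comment lets cleanup strip the rule later
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
    # sanity-check both directions before starting the target
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1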
00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:07.009 rmmod nvme_tcp 00:09:07.009 rmmod nvme_fabrics 00:09:07.009 rmmod nvme_keyring 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.009 13:13:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:09.554 00:09:09.554 real 0m10.958s 00:09:09.554 user 0m2.379s 00:09:09.554 sys 0m6.512s 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:09.554 ************************************ 00:09:09.554 END TEST nvmf_target_multipath 00:09:09.554 ************************************ 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:09.554 ************************************ 00:09:09.554 START TEST nvmf_zcopy 00:09:09.554 ************************************ 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:09.554 * Looking for test storage... 
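nvmftestfini, traced just before the END TEST banner above, unwinds that setup in reverse: retry-unload the kernel initiator modules, strip only the tagged iptables rules, tear down the namespace, and flush the initiator address. A rough equivalent under the same assumptions; _remove_spdk_ns is a harness helper whose body is not shown in this trace, so the netns deletion line is an assumption:

    set +e
    for i in {1..20}; do                   # module unload can race with teardown
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
    # drop only rules tagged SPDK_NVMF, keep everything else intact
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns del cvl_0_0_ns_spdk           # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1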
00:09:09.554 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:09.554 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:09.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.555 --rc genhtml_branch_coverage=1 00:09:09.555 --rc genhtml_function_coverage=1 00:09:09.555 --rc genhtml_legend=1 00:09:09.555 --rc geninfo_all_blocks=1 00:09:09.555 --rc geninfo_unexecuted_blocks=1 00:09:09.555 00:09:09.555 ' 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:09.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.555 --rc genhtml_branch_coverage=1 00:09:09.555 --rc genhtml_function_coverage=1 00:09:09.555 --rc genhtml_legend=1 00:09:09.555 --rc geninfo_all_blocks=1 00:09:09.555 --rc geninfo_unexecuted_blocks=1 00:09:09.555 00:09:09.555 ' 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:09.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.555 --rc genhtml_branch_coverage=1 00:09:09.555 --rc genhtml_function_coverage=1 00:09:09.555 --rc genhtml_legend=1 00:09:09.555 --rc geninfo_all_blocks=1 00:09:09.555 --rc geninfo_unexecuted_blocks=1 00:09:09.555 00:09:09.555 ' 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:09.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.555 --rc genhtml_branch_coverage=1 00:09:09.555 --rc genhtml_function_coverage=1 00:09:09.555 --rc genhtml_legend=1 00:09:09.555 --rc geninfo_all_blocks=1 00:09:09.555 --rc geninfo_unexecuted_blocks=1 00:09:09.555 00:09:09.555 ' 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:09.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:09.555 13:13:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.555 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:09.555 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:09.555 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:09.555 13:13:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.699 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.699 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:17.699 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:17.699 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:17.699 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:17.699 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:17.699 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:17.699 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:17.699 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:17.699 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:17.699 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:17.699 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:17.700 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:17.700 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:17.700 Found net devices under 0000:31:00.0: cvl_0_0 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:17.700 Found net devices under 0000:31:00.1: cvl_0_1 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:17.700 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:17.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms 00:09:17.961 00:09:17.961 --- 10.0.0.2 ping statistics --- 00:09:17.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.961 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:17.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:09:17.961 00:09:17.961 --- 10.0.0.1 ping statistics --- 00:09:17.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.961 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=763864 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 763864 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 763864 ']' 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.961 13:13:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:17.961 [2024-12-05 13:13:40.466877] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
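nvmfappstart (nvmfpid=763864 above) launches the target inside the namespace and blocks until its RPC socket answers. A compressed sketch; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its actual implementation:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # wait until the app answers on /var/tmp/spdk.sock (illustrative poll)
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done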
00:09:17.961 [2024-12-05 13:13:40.466930] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.222 [2024-12-05 13:13:40.570683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.222 [2024-12-05 13:13:40.616493] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.222 [2024-12-05 13:13:40.616545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.222 [2024-12-05 13:13:40.616553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.222 [2024-12-05 13:13:40.616560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.222 [2024-12-05 13:13:40.616570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:18.222 [2024-12-05 13:13:40.617359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:18.793 [2024-12-05 13:13:41.312271] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:18.793 [2024-12-05 13:13:41.336584] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.793 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.054 malloc0 00:09:19.054 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.054 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:19.054 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.054 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:19.054 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.054 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:19.054 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:19.054 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:19.054 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:19.054 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:19.054 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:19.054 { 00:09:19.054 "params": { 00:09:19.054 "name": "Nvme$subsystem", 00:09:19.054 "trtype": "$TEST_TRANSPORT", 00:09:19.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:19.054 "adrfam": "ipv4", 00:09:19.054 "trsvcid": "$NVMF_PORT", 00:09:19.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:19.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:19.054 "hdgst": ${hdgst:-false}, 00:09:19.054 "ddgst": ${ddgst:-false} 00:09:19.054 }, 00:09:19.054 "method": "bdev_nvme_attach_controller" 00:09:19.054 } 00:09:19.054 EOF 00:09:19.054 )") 00:09:19.054 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:19.054 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
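Stripped of the xtrace noise, the target-side provisioning traced above comes down to six RPCs, reproduced verbatim from the trace (-c 0 sets the in-capsule data size to 0; --zcopy enables zero-copy on the TCP transport):

    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

On the initiator side, gen_nvmf_target_json expands the heredoc template above ($TEST_TRANSPORT to tcp, $NVMF_FIRST_TARGET_IP to 10.0.0.2, $NVMF_PORT to 4420) and hands the result to bdevperf through --json /dev/fd/62, so no config file ever touches disk. The resolved object is equivalent in effect to the following rpc.py call; this is an illustrative sketch, the test itself uses the JSON path rather than a live RPC:

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1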
00:09:19.054 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:19.054 13:13:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:19.054 "params": { 00:09:19.054 "name": "Nvme1", 00:09:19.054 "trtype": "tcp", 00:09:19.054 "traddr": "10.0.0.2", 00:09:19.054 "adrfam": "ipv4", 00:09:19.054 "trsvcid": "4420", 00:09:19.054 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:19.054 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:19.054 "hdgst": false, 00:09:19.054 "ddgst": false 00:09:19.054 }, 00:09:19.054 "method": "bdev_nvme_attach_controller" 00:09:19.054 }' 00:09:19.054 [2024-12-05 13:13:41.438966] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:09:19.054 [2024-12-05 13:13:41.439028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid763999 ] 00:09:19.054 [2024-12-05 13:13:41.521386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.054 [2024-12-05 13:13:41.563123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.315 Running I/O for 10 seconds... 00:09:21.213 6701.00 IOPS, 52.35 MiB/s [2024-12-05T12:13:45.163Z] 6759.00 IOPS, 52.80 MiB/s [2024-12-05T12:13:45.732Z] 7722.00 IOPS, 60.33 MiB/s [2024-12-05T12:13:47.116Z] 8244.25 IOPS, 64.41 MiB/s [2024-12-05T12:13:48.055Z] 8560.20 IOPS, 66.88 MiB/s [2024-12-05T12:13:48.996Z] 8774.00 IOPS, 68.55 MiB/s [2024-12-05T12:13:49.936Z] 8924.71 IOPS, 69.72 MiB/s [2024-12-05T12:13:50.877Z] 9038.62 IOPS, 70.61 MiB/s [2024-12-05T12:13:51.817Z] 9122.67 IOPS, 71.27 MiB/s [2024-12-05T12:13:51.817Z] 9196.40 IOPS, 71.85 MiB/s 00:09:29.249 Latency(us) 00:09:29.249 [2024-12-05T12:13:51.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.249 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:29.249 Verification LBA range: start 0x0 length 0x1000 00:09:29.249 Nvme1n1 : 10.01 9198.26 71.86 0.00 0.00 13864.66 1324.37 28398.93 00:09:29.249 [2024-12-05T12:13:51.817Z] =================================================================================================================== 00:09:29.249 [2024-12-05T12:13:51.817Z] Total : 9198.26 71.86 0.00 0.00 13864.66 1324.37 28398.93 00:09:29.509 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=766076 00:09:29.509 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:29.509 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:29.509 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:29.509 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:29.509 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:29.509 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:29.509 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:29.509 { 00:09:29.509 "params": { 00:09:29.510 "name": "Nvme$subsystem", 00:09:29.510 "trtype": "$TEST_TRANSPORT", 00:09:29.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:29.510 "adrfam": "ipv4", 00:09:29.510 "trsvcid": "$NVMF_PORT", 00:09:29.510 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:09:29.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:29.510 "hdgst": ${hdgst:-false}, 00:09:29.510 "ddgst": ${ddgst:-false} 00:09:29.510 }, 00:09:29.510 "method": "bdev_nvme_attach_controller" 00:09:29.510 } 00:09:29.510 EOF 00:09:29.510 )") 00:09:29.510 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:29.510 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:29.510 [2024-12-05 13:13:51.869366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.510 [2024-12-05 13:13:51.869399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.510 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:29.510 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:29.510 13:13:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:29.510 "params": { 00:09:29.510 "name": "Nvme1", 00:09:29.510 "trtype": "tcp", 00:09:29.510 "traddr": "10.0.0.2", 00:09:29.510 "adrfam": "ipv4", 00:09:29.510 "trsvcid": "4420", 00:09:29.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:29.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:29.510 "hdgst": false, 00:09:29.510 "ddgst": false 00:09:29.510 }, 00:09:29.510 "method": "bdev_nvme_attach_controller" 00:09:29.510 }' 00:09:29.510 [2024-12-05 13:13:51.881362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.510 [2024-12-05 13:13:51.881371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.510 [2024-12-05 13:13:51.893389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.510 [2024-12-05 13:13:51.893397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.510 [2024-12-05 13:13:51.905421] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.510 [2024-12-05 13:13:51.905429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.510 [2024-12-05 13:13:51.914173] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:09:29.510 [2024-12-05 13:13:51.914221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid766076 ]
00:09:29.510 [2024-12-05 13:13:51.990426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:29.510 [2024-12-05 13:13:52.026141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:29.510-00:09:30.031 [condensed: interleaved with the notices above, the following two-line error pair repeats 36 times between 13:13:51.917452 and 13:13:52.338551:
    subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
    nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace]
00:09:30.031 Running I/O for 5 seconds...
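The error cadence above, one pair roughly every 12 ms, is deliberate: while the 5-second random read/write job runs, the test keeps re-issuing nvmf_subsystem_add_ns for an NSID that is already occupied. Each attempt pauses the subsystem (hence nvmf_rpc_ns_paused in the trace), is rejected by spdk_nvmf_subsystem_add_ns_ext, and resumes, which exercises the pause/resume path under live zcopy I/O. A hypothetical reconstruction of the driving loop, since the script body itself is not visible in this trace:

    # hypothetical sketch: re-add NSID 1 for as long as the perf job lives
    while kill -0 "$perfpid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done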
00:09:30.031-00:09:32.118 [condensed: the same two-line error pair repeats roughly every 12-13 ms, about 170 more times, between 13:13:52.353011 and 13:13:54.631659 while the 5-second job runs:
    subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
    nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
the only other output in that window is the two bdevperf progress lines below]
00:09:30.813 19371.00 IOPS, 151.34 MiB/s [2024-12-05T12:13:53.382Z]
00:09:31.857 19409.00 IOPS, 151.63 MiB/s [2024-12-05T12:13:54.425Z]
00:09:32.118 [2024-12-05 13:13:54.631659]
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.118 [2024-12-05 13:13:54.631674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.118 [2024-12-05 13:13:54.644142] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.118 [2024-12-05 13:13:54.644157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.118 [2024-12-05 13:13:54.657148] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.118 [2024-12-05 13:13:54.657163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.118 [2024-12-05 13:13:54.670473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.118 [2024-12-05 13:13:54.670488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.118 [2024-12-05 13:13:54.683453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.118 [2024-12-05 13:13:54.683468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.378 [2024-12-05 13:13:54.696878] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.378 [2024-12-05 13:13:54.696894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.378 [2024-12-05 13:13:54.710315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.378 [2024-12-05 13:13:54.710330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.378 [2024-12-05 13:13:54.723472] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.378 [2024-12-05 13:13:54.723487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.378 [2024-12-05 13:13:54.736004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.378 [2024-12-05 13:13:54.736019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.378 [2024-12-05 13:13:54.749092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.379 [2024-12-05 13:13:54.749107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.379 [2024-12-05 13:13:54.761720] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.379 [2024-12-05 13:13:54.761734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.379 [2024-12-05 13:13:54.774608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.379 [2024-12-05 13:13:54.774623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.379 [2024-12-05 13:13:54.788125] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.379 [2024-12-05 13:13:54.788140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.379 [2024-12-05 13:13:54.800987] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.379 [2024-12-05 13:13:54.801001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.379 [2024-12-05 13:13:54.813646] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.379 [2024-12-05 13:13:54.813661] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.379 [2024-12-05 13:13:54.826464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.379 [2024-12-05 13:13:54.826478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.379 [2024-12-05 13:13:54.839354] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.379 [2024-12-05 13:13:54.839369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.379 [2024-12-05 13:13:54.852962] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.379 [2024-12-05 13:13:54.852978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.379 [2024-12-05 13:13:54.865158] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.379 [2024-12-05 13:13:54.865173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.379 [2024-12-05 13:13:54.878501] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.379 [2024-12-05 13:13:54.878516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.379 [2024-12-05 13:13:54.890947] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.379 [2024-12-05 13:13:54.890963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.379 [2024-12-05 13:13:54.903775] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.379 [2024-12-05 13:13:54.903790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.379 [2024-12-05 13:13:54.916840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.379 [2024-12-05 13:13:54.916856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.379 [2024-12-05 13:13:54.930290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.379 [2024-12-05 13:13:54.930305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.379 [2024-12-05 13:13:54.943342] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.379 [2024-12-05 13:13:54.943357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:54.956228] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:54.956244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:54.969600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:54.969615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:54.982519] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:54.982534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:54.995703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:54.995719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:55.008312] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:55.008328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:55.021851] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:55.021871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:55.034625] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:55.034641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:55.047873] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:55.047889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:55.060721] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:55.060737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:55.073127] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:55.073142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:55.085895] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:55.085910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:55.099036] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:55.099051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:55.112651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:55.112666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:55.125840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:55.125855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:55.138675] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:55.138690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:55.151474] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:55.151489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:55.164374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:55.164390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:55.177074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:55.177090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:55.190054] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:55.190069] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.639 [2024-12-05 13:13:55.203476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.639 [2024-12-05 13:13:55.203491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.898 [2024-12-05 13:13:55.217155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.898 [2024-12-05 13:13:55.217171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.898 [2024-12-05 13:13:55.230312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.898 [2024-12-05 13:13:55.230327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.898 [2024-12-05 13:13:55.243628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.898 [2024-12-05 13:13:55.243643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.898 [2024-12-05 13:13:55.256393] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.898 [2024-12-05 13:13:55.256409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.898 [2024-12-05 13:13:55.269380] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.898 [2024-12-05 13:13:55.269395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.898 [2024-12-05 13:13:55.282030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.898 [2024-12-05 13:13:55.282052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.898 [2024-12-05 13:13:55.295159] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.898 [2024-12-05 13:13:55.295174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.898 [2024-12-05 13:13:55.308200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.898 [2024-12-05 13:13:55.308215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.898 [2024-12-05 13:13:55.320785] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.899 [2024-12-05 13:13:55.320799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.899 [2024-12-05 13:13:55.333179] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.899 [2024-12-05 13:13:55.333195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.899 [2024-12-05 13:13:55.345738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.899 [2024-12-05 13:13:55.345753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.899 19433.33 IOPS, 151.82 MiB/s [2024-12-05T12:13:55.467Z] [2024-12-05 13:13:55.358730] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.899 [2024-12-05 13:13:55.358745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.899 [2024-12-05 13:13:55.370818] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.899 [2024-12-05 13:13:55.370833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.899 [2024-12-05 
13:13:55.383969] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.899 [2024-12-05 13:13:55.383985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.899 [2024-12-05 13:13:55.397044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.899 [2024-12-05 13:13:55.397059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.899 [2024-12-05 13:13:55.410032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.899 [2024-12-05 13:13:55.410047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.899 [2024-12-05 13:13:55.423312] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.899 [2024-12-05 13:13:55.423332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.899 [2024-12-05 13:13:55.436663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.899 [2024-12-05 13:13:55.436679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.899 [2024-12-05 13:13:55.449993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.899 [2024-12-05 13:13:55.450009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:32.899 [2024-12-05 13:13:55.463595] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:32.899 [2024-12-05 13:13:55.463610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.158 [2024-12-05 13:13:55.476890] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.158 [2024-12-05 13:13:55.476906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.158 [2024-12-05 13:13:55.490196] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.158 [2024-12-05 13:13:55.490212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.503703] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.503719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.515760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.515776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.528897] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.528912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.541668] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.541683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.555124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.555139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.568358] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.568373] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.581829] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.581844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.594549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.594564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.607562] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.607577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.620443] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.620458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.633147] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.633162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.645622] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.645637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.658668] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.658683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.671940] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.671958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.684499] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.684514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.697888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.697903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.710726] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.710741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.159 [2024-12-05 13:13:55.723892] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.159 [2024-12-05 13:13:55.723907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.419 [2024-12-05 13:13:55.737251] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.419 [2024-12-05 13:13:55.737266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.419 [2024-12-05 13:13:55.749748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.419 [2024-12-05 13:13:55.749763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.419 [2024-12-05 13:13:55.763243] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.419 [2024-12-05 13:13:55.763258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.419 [2024-12-05 13:13:55.776131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.419 [2024-12-05 13:13:55.776146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.419 [2024-12-05 13:13:55.788748] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.419 [2024-12-05 13:13:55.788763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.419 [2024-12-05 13:13:55.801118] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.419 [2024-12-05 13:13:55.801132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.419 [2024-12-05 13:13:55.814279] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.419 [2024-12-05 13:13:55.814295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.419 [2024-12-05 13:13:55.827840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.419 [2024-12-05 13:13:55.827855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.419 [2024-12-05 13:13:55.840284] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.419 [2024-12-05 13:13:55.840299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.419 [2024-12-05 13:13:55.853419] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.419 [2024-12-05 13:13:55.853435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.419 [2024-12-05 13:13:55.866605] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.419 [2024-12-05 13:13:55.866620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.419 [2024-12-05 13:13:55.879404] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.419 [2024-12-05 13:13:55.879419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.419 [2024-12-05 13:13:55.892245] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.419 [2024-12-05 13:13:55.892260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.419 [2024-12-05 13:13:55.905390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.419 [2024-12-05 13:13:55.905405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.419 [2024-12-05 13:13:55.918698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.419 [2024-12-05 13:13:55.918717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.419 [2024-12-05 13:13:55.932067] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.420 [2024-12-05 13:13:55.932083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.420 [2024-12-05 13:13:55.944846] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.420 [2024-12-05 13:13:55.944865] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.420 [2024-12-05 13:13:55.957589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.420 [2024-12-05 13:13:55.957604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.420 [2024-12-05 13:13:55.970399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.420 [2024-12-05 13:13:55.970414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.420 [2024-12-05 13:13:55.983566] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.420 [2024-12-05 13:13:55.983581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.679 [2024-12-05 13:13:55.996927] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.679 [2024-12-05 13:13:55.996942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.679 [2024-12-05 13:13:56.009658] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.679 [2024-12-05 13:13:56.009673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.679 [2024-12-05 13:13:56.023555] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.679 [2024-12-05 13:13:56.023570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.679 [2024-12-05 13:13:56.036777] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.679 [2024-12-05 13:13:56.036792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.679 [2024-12-05 13:13:56.049451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.680 [2024-12-05 13:13:56.049466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.680 [2024-12-05 13:13:56.062810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.680 [2024-12-05 13:13:56.062825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.680 [2024-12-05 13:13:56.076111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.680 [2024-12-05 13:13:56.076126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.680 [2024-12-05 13:13:56.088833] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.680 [2024-12-05 13:13:56.088848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.680 [2024-12-05 13:13:56.101366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.680 [2024-12-05 13:13:56.101381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.680 [2024-12-05 13:13:56.113764] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.680 [2024-12-05 13:13:56.113779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.680 [2024-12-05 13:13:56.126645] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.680 [2024-12-05 13:13:56.126660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.680 [2024-12-05 13:13:56.139652] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.680 [2024-12-05 13:13:56.139667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.680 [2024-12-05 13:13:56.152331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.680 [2024-12-05 13:13:56.152346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.680 [2024-12-05 13:13:56.165780] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.680 [2024-12-05 13:13:56.165795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.680 [2024-12-05 13:13:56.178230] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.680 [2024-12-05 13:13:56.178245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.680 [2024-12-05 13:13:56.190935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.680 [2024-12-05 13:13:56.190950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.680 [2024-12-05 13:13:56.203012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.680 [2024-12-05 13:13:56.203027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.680 [2024-12-05 13:13:56.215296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.680 [2024-12-05 13:13:56.215311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.680 [2024-12-05 13:13:56.228535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.680 [2024-12-05 13:13:56.228550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.680 [2024-12-05 13:13:56.242077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.680 [2024-12-05 13:13:56.242092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.254980] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.254995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.268052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.268067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.281412] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.281427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.294126] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.294141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.306668] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.306683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.319834] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.319849] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.332447] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.332461] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.345808] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.345822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 19444.75 IOPS, 151.91 MiB/s [2024-12-05T12:13:56.507Z] [2024-12-05 13:13:56.358311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.358326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.371465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.371480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.384792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.384807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.398123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.398138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.411364] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.411379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.424316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.424331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.437591] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.437606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.450350] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.450366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.463366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.463380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.476792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.476807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.490389] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.490404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.939 [2024-12-05 13:13:56.503637] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:33.939 [2024-12-05 13:13:56.503652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 
13:13:56.517369] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.517385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.530596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.530611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.543798] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.543813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.557119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.557134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.569399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.569413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.582343] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.582357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.595488] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.595503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.608778] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.608793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.622144] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.622160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.635272] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.635287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.648453] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.648469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.660953] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.660969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.673112] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.673128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.685840] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.685856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.698920] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.698936] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.712256] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.712271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.725515] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.725530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.738911] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.738927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.752688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.752704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.198 [2024-12-05 13:13:56.765013] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.198 [2024-12-05 13:13:56.765028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.778318] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.778335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.791124] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.791139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.804375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.804390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.817161] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.817176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.830201] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.830216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.843640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.843655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.856615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.856631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.869061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.869077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.881609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.881624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.894137] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.894156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.907788] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.907804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.920611] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.920626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.933567] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.933582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.947017] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.947033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.960197] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.960212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.973487] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.973503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:56.986497] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:56.986512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:57.000052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:57.000067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.457 [2024-12-05 13:13:57.012666] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.457 [2024-12-05 13:13:57.012681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.717 [2024-12-05 13:13:57.025019] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.717 [2024-12-05 13:13:57.025035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.717 [2024-12-05 13:13:57.037506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.717 [2024-12-05 13:13:57.037521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.717 [2024-12-05 13:13:57.050770] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.717 [2024-12-05 13:13:57.050785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.717 [2024-12-05 13:13:57.064091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.717 [2024-12-05 13:13:57.064107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.717 [2024-12-05 13:13:57.077564] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.717 [2024-12-05 13:13:57.077580] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.717 [2024-12-05 13:13:57.090640] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.717 [2024-12-05 13:13:57.090656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.717 [2024-12-05 13:13:57.104078] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.717 [2024-12-05 13:13:57.104093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.717 [2024-12-05 13:13:57.117103] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.717 [2024-12-05 13:13:57.117118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.717 [2024-12-05 13:13:57.130573] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.717 [2024-12-05 13:13:57.130588] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.717 [2024-12-05 13:13:57.143976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.717 [2024-12-05 13:13:57.143996] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.717 [2024-12-05 13:13:57.156558] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.717 [2024-12-05 13:13:57.156573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.717 [2024-12-05 13:13:57.169765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.717 [2024-12-05 13:13:57.169780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.718 [2024-12-05 13:13:57.183316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.718 [2024-12-05 13:13:57.183331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.718 [2024-12-05 13:13:57.195872] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.718 [2024-12-05 13:13:57.195887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.718 [2024-12-05 13:13:57.208736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.718 [2024-12-05 13:13:57.208751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.718 [2024-12-05 13:13:57.221537] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.718 [2024-12-05 13:13:57.221552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.718 [2024-12-05 13:13:57.234624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.718 [2024-12-05 13:13:57.234639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.718 [2024-12-05 13:13:57.247719] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.718 [2024-12-05 13:13:57.247735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.718 [2024-12-05 13:13:57.260844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:34.718 [2024-12-05 13:13:57.260859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:34.718 [2024-12-05 13:13:57.274162] 
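The paired errors above are what an SPDK target logs when an RPC client keeps asking to attach a bdev under a namespace ID that is already occupied on nqn.2016-06.io.spdk:cnode1. A minimal way to reproduce the pattern against a running target is a loop like the following sketch (illustrative, not the literal zcopy.sh code; it assumes SPDK's scripts/rpc.py and a bdev named malloc0 already serving NSID 1, names taken from this log):

  #!/usr/bin/env bash
  # Sketch: repeatedly request an NSID that is already taken. Every call
  # makes the target log the subsystem.c:2126 / nvmf_rpc.c:1520 error pair
  # seen above, and the RPC itself returns non-zero.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  for _ in $(seq 1 100); do
      "$RPC" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 || true  # NSID 1 is occupied
  done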
00:09:31.857 19409.00 IOPS, 151.63 MiB/s [2024-12-05T12:13:54.425Z]
00:09:32.899 19433.33 IOPS, 151.82 MiB/s [2024-12-05T12:13:55.467Z]
00:09:33.939 19444.75 IOPS, 151.91 MiB/s [2024-12-05T12:13:56.507Z]
00:09:34.978 19440.40 IOPS, 151.88 MiB/s
00:09:34.978 Latency(us)
00:09:34.978 [2024-12-05T12:13:57.546Z] Device Information : runtime(s)     IOPS     MiB/s   Fail/s    TO/s   Average       min       max
00:09:34.978 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:34.978 Nvme1n1            :       5.01  19441.22   151.88     0.00    0.00   6577.12   2607.79  12342.61
00:09:34.978 [2024-12-05T12:13:57.546Z] ===================================================================================================================
00:09:34.978 [2024-12-05T12:13:57.546Z] Total              :             19441.22   151.88     0.00    0.00   6577.12   2607.79  12342.61
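The job summary above (Core Mask 0x1, randrw at 50%, queue depth 128, 8192-byte I/O, ~5 s runtime) is the output format of SPDK's bdevperf example, which kept I/O flowing to Nvme1n1 while the RPC loop hammered the subsystem. A comparable standalone run would look roughly like the sketch below; bdev.json is an assumed config file attaching the same NVMe-oF namespace as Nvme1n1, and the flags are mapped from the job parameters printed above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -m core mask, -q queue depth, -o I/O size in bytes, -w workload pattern,
  # -M read percentage of the randrw mix, -t runtime in seconds.
  "$SPDK/build/examples/bdevperf" --json bdev.json -m 0x1 -q 128 -o 8192 -w randrw -M 50 -t 5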
00:09:34.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (766076) - No such process
00:09:34.978 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 766076
00:09:34.978 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:34.978 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:34.978 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:34.978 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:34.978 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:34.978 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:34.978 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:34.978 delay0
00:09:34.978 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:34.978 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:34.978 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:34.978 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:34.978 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:34.978 13:13:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
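The trace above swaps NSID 1 over to a delay bdev (bdev_delay_create's -r/-t/-w/-n values are average/p99 read and write latencies in microseconds, so roughly 1 s each) and then runs SPDK's abort example against it; the artificial latency keeps commands in flight long enough to be abortable. The same invocation, annotated (same command as in the trace; flag meanings follow SPDK example-app conventions):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -c 0x1 \
      -t 5 \
      -q 64 \
      -w randrw -M 50 \
      -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
  # -c 0x1: run on core 0 only              -t 5: run for 5 seconds
  # -q 64: up to 64 outstanding commands    -w randrw -M 50: 50/50 read/write mix
  # -l warning: library log level           -r ...: NVMe-oF/TCP target at
  #                                         10.0.0.2:4420, namespace 1 (delay0)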
lcore 0 00:09:43.490 Initialization complete. Launching workers. 00:09:43.490 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 237, failed: 30167 00:09:43.490 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 30275, failed to submit 129 00:09:43.490 success 30195, unsuccessful 80, failed 0 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:43.490 rmmod nvme_tcp 00:09:43.490 rmmod nvme_fabrics 00:09:43.490 rmmod nvme_keyring 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 763864 ']' 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 763864 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 763864 ']' 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 763864 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 763864 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 763864' 00:09:43.490 killing process with pid 763864 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 763864 00:09:43.490 13:14:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 763864 00:09:43.490 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:43.490 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:43.490 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:43.490 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:43.490 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:43.490 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:43.490 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:43.491 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:43.491 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:43.491 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.491 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.491 13:14:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:44.874 00:09:44.874 real 0m35.420s 00:09:44.874 user 0m45.990s 00:09:44.874 sys 0m12.268s 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:44.874 ************************************ 00:09:44.874 END TEST nvmf_zcopy 00:09:44.874 ************************************ 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.874 ************************************ 00:09:44.874 START TEST nvmf_nmic 00:09:44.874 ************************************ 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:44.874 * Looking for test storage... 
00:09:44.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.874 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:45.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.136 --rc genhtml_branch_coverage=1 00:09:45.136 --rc genhtml_function_coverage=1 00:09:45.136 --rc genhtml_legend=1 00:09:45.136 --rc geninfo_all_blocks=1 00:09:45.136 --rc geninfo_unexecuted_blocks=1 00:09:45.136 00:09:45.136 ' 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:45.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.136 --rc genhtml_branch_coverage=1 00:09:45.136 --rc genhtml_function_coverage=1 00:09:45.136 --rc genhtml_legend=1 00:09:45.136 --rc geninfo_all_blocks=1 00:09:45.136 --rc geninfo_unexecuted_blocks=1 00:09:45.136 00:09:45.136 ' 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:45.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.136 --rc genhtml_branch_coverage=1 00:09:45.136 --rc genhtml_function_coverage=1 00:09:45.136 --rc genhtml_legend=1 00:09:45.136 --rc geninfo_all_blocks=1 00:09:45.136 --rc geninfo_unexecuted_blocks=1 00:09:45.136 00:09:45.136 ' 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:45.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.136 --rc genhtml_branch_coverage=1 00:09:45.136 --rc genhtml_function_coverage=1 00:09:45.136 --rc genhtml_legend=1 00:09:45.136 --rc geninfo_all_blocks=1 00:09:45.136 --rc geninfo_unexecuted_blocks=1 00:09:45.136 00:09:45.136 ' 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:45.136 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:45.137 
13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:45.137 13:14:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:53.278 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:53.278 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.278 13:14:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:53.278 Found net devices under 0000:31:00.0: cvl_0_0 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:53.278 Found net devices under 0000:31:00.1: cvl_0_1 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:53.278 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:53.537 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:53.537 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:53.537 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:53.537 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:53.537 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:53.537 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:53.537 13:14:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:53.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.535 ms 00:09:53.537 00:09:53.537 --- 10.0.0.2 ping statistics --- 00:09:53.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.537 rtt min/avg/max/mdev = 0.535/0.535/0.535/0.000 ms 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:53.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:53.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:09:53.537 00:09:53.537 --- 10.0.0.1 ping statistics --- 00:09:53.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.537 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=773498 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 773498 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 773498 ']' 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.537 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:53.796 [2024-12-05 13:14:16.135279] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
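[editor's sketch] The ping checks above complete the test bed that nvmftestinit builds for NET_TYPE=phy: one physical port (cvl_0_0, 10.0.0.2) is moved into a network namespace to act as the target side, while its peer port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator. A minimal standalone sketch of that bring-up, using the interface and namespace names from the trace (any back-to-back NIC pair would work the same way):

# create the target namespace and move one port into it
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# address both sides and bring the links up
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let NVMe/TCP traffic through any host firewall on the initiator side
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify reachability in both directions
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1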
00:09:53.796 [2024-12-05 13:14:16.135353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.796 [2024-12-05 13:14:16.227329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:53.796 [2024-12-05 13:14:16.270599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.796 [2024-12-05 13:14:16.270637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.796 [2024-12-05 13:14:16.270645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.796 [2024-12-05 13:14:16.270652] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.796 [2024-12-05 13:14:16.270658] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.796 [2024-12-05 13:14:16.272313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.796 [2024-12-05 13:14:16.272432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:53.796 [2024-12-05 13:14:16.272590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.796 [2024-12-05 13:14:16.272590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:54.737 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.737 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:54.737 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:54.737 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:54.737 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.737 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.737 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.737 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.737 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.737 [2024-12-05 13:14:16.988277] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.737 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.737 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:54.737 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.737 13:14:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.737 Malloc0 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.737 [2024-12-05 13:14:17.057236] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:54.737 test case1: single bdev can't be used in multiple subsystems 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.737 [2024-12-05 13:14:17.093150] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:54.737 [2024-12-05 13:14:17.093168] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:54.737 [2024-12-05 13:14:17.093180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:54.737 request: 00:09:54.737 { 00:09:54.737 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:54.737 "namespace": { 00:09:54.737 "bdev_name": "Malloc0", 00:09:54.737 "no_auto_visible": false, 
00:09:54.737 "hide_metadata": false 00:09:54.737 }, 00:09:54.737 "method": "nvmf_subsystem_add_ns", 00:09:54.737 "req_id": 1 00:09:54.737 } 00:09:54.737 Got JSON-RPC error response 00:09:54.737 response: 00:09:54.737 { 00:09:54.737 "code": -32602, 00:09:54.737 "message": "Invalid parameters" 00:09:54.737 } 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:54.737 Adding namespace failed - expected result. 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:54.737 test case2: host connect to nvmf target in multiple paths 00:09:54.737 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:54.738 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.738 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.738 [2024-12-05 13:14:17.105295] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:54.738 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.738 13:14:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:56.120 13:14:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:58.031 13:14:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:58.031 13:14:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:58.031 13:14:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:58.031 13:14:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:58.031 13:14:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:09:59.967 13:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:59.967 13:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:59.967 13:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:59.967 13:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:59.967 13:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:59.967 13:14:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:59.967 13:14:22 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:59.967 [global] 00:09:59.967 thread=1 00:09:59.967 invalidate=1 00:09:59.967 rw=write 00:09:59.967 time_based=1 00:09:59.967 runtime=1 00:09:59.967 ioengine=libaio 00:09:59.967 direct=1 00:09:59.967 bs=4096 00:09:59.967 iodepth=1 00:09:59.967 norandommap=0 00:09:59.967 numjobs=1 00:09:59.967 00:09:59.967 verify_dump=1 00:09:59.967 verify_backlog=512 00:09:59.967 verify_state_save=0 00:09:59.967 do_verify=1 00:09:59.967 verify=crc32c-intel 00:09:59.967 [job0] 00:09:59.967 filename=/dev/nvme0n1 00:09:59.967 Could not set queue depth (nvme0n1) 00:10:00.228 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:00.228 fio-3.35 00:10:00.228 Starting 1 thread 00:10:01.171 00:10:01.171 job0: (groupid=0, jobs=1): err= 0: pid=774855: Thu Dec 5 13:14:23 2024 00:10:01.171 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:01.171 slat (nsec): min=26126, max=59153, avg=27188.11, stdev=2336.39 00:10:01.171 clat (usec): min=758, max=1195, avg=976.50, stdev=59.43 00:10:01.171 lat (usec): min=785, max=1221, avg=1003.68, stdev=59.37 00:10:01.171 clat percentiles (usec): 00:10:01.171 | 1.00th=[ 807], 5.00th=[ 865], 10.00th=[ 898], 20.00th=[ 938], 00:10:01.171 | 30.00th=[ 963], 40.00th=[ 971], 50.00th=[ 979], 60.00th=[ 996], 00:10:01.171 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1037], 95.00th=[ 1057], 00:10:01.171 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1188], 99.95th=[ 1188], 00:10:01.171 | 99.99th=[ 1188] 00:10:01.171 write: IOPS=714, BW=2857KiB/s (2926kB/s)(2860KiB/1001msec); 0 zone resets 00:10:01.171 slat (usec): min=9, max=27099, avg=67.70, stdev=1012.39 00:10:01.171 clat (usec): min=246, max=811, avg=599.07, stdev=96.65 00:10:01.171 lat (usec): min=257, max=27786, avg=666.77, stdev=1020.72 00:10:01.171 clat percentiles (usec): 00:10:01.171 | 1.00th=[ 355], 5.00th=[ 412], 10.00th=[ 474], 20.00th=[ 510], 00:10:01.171 | 30.00th=[ 562], 40.00th=[ 586], 50.00th=[ 603], 60.00th=[ 627], 00:10:01.171 | 70.00th=[ 668], 80.00th=[ 685], 90.00th=[ 709], 95.00th=[ 734], 00:10:01.171 | 99.00th=[ 775], 99.50th=[ 791], 99.90th=[ 816], 99.95th=[ 816], 00:10:01.171 | 99.99th=[ 816] 00:10:01.171 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:01.171 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:01.171 lat (usec) : 250=0.08%, 500=9.37%, 750=47.03%, 1000=28.93% 00:10:01.171 lat (msec) : 2=14.59% 00:10:01.171 cpu : usr=2.30%, sys=3.20%, ctx=1230, majf=0, minf=1 00:10:01.171 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.171 issued rwts: total=512,715,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.171 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.171 00:10:01.171 Run status group 0 (all jobs): 00:10:01.171 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:10:01.171 WRITE: bw=2857KiB/s (2926kB/s), 2857KiB/s-2857KiB/s (2926kB/s-2926kB/s), io=2860KiB (2929kB), run=1001-1001msec 00:10:01.171 00:10:01.171 Disk stats (read/write): 00:10:01.171 nvme0n1: ios=564/559, merge=0/0, ticks=903/307, in_queue=1210, util=98.90% 00:10:01.171 
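[editor's sketch] The [job0] file that fio-wrapper dumped above maps one-to-one onto fio command-line options; a roughly equivalent standalone invocation, assuming fio is installed and /dev/nvme0n1 is the connected fabrics namespace as in this run:

fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread=1 --invalidate=1 \
    --time_based=1 --runtime=1 --norandommap=0 \
    --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 --verify_state_save=0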
13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:01.432 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:01.432 rmmod nvme_tcp 00:10:01.432 rmmod nvme_fabrics 00:10:01.432 rmmod nvme_keyring 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 773498 ']' 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 773498 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 773498 ']' 00:10:01.432 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 773498 00:10:01.693 13:14:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773498 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773498' 00:10:01.693 killing process with pid 773498 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 773498 00:10:01.693 13:14:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 773498 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:01.693 13:14:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:04.240 00:10:04.240 real 0m19.030s 00:10:04.240 user 0m48.815s 00:10:04.240 sys 0m7.367s 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:04.240 ************************************ 00:10:04.240 END TEST nvmf_nmic 00:10:04.240 ************************************ 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:04.240 ************************************ 00:10:04.240 START TEST nvmf_fio_target 00:10:04.240 ************************************ 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:04.240 * Looking for test storage... 
00:10:04.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:04.240 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:04.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.240 --rc genhtml_branch_coverage=1 00:10:04.240 --rc genhtml_function_coverage=1 00:10:04.240 --rc genhtml_legend=1 00:10:04.240 --rc geninfo_all_blocks=1 00:10:04.241 --rc geninfo_unexecuted_blocks=1 00:10:04.241 00:10:04.241 ' 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:04.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.241 --rc genhtml_branch_coverage=1 00:10:04.241 --rc genhtml_function_coverage=1 00:10:04.241 --rc genhtml_legend=1 00:10:04.241 --rc geninfo_all_blocks=1 00:10:04.241 --rc geninfo_unexecuted_blocks=1 00:10:04.241 00:10:04.241 ' 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:04.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.241 --rc genhtml_branch_coverage=1 00:10:04.241 --rc genhtml_function_coverage=1 00:10:04.241 --rc genhtml_legend=1 00:10:04.241 --rc geninfo_all_blocks=1 00:10:04.241 --rc geninfo_unexecuted_blocks=1 00:10:04.241 00:10:04.241 ' 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:04.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:04.241 --rc genhtml_branch_coverage=1 00:10:04.241 --rc genhtml_function_coverage=1 00:10:04.241 --rc genhtml_legend=1 00:10:04.241 --rc geninfo_all_blocks=1 00:10:04.241 --rc geninfo_unexecuted_blocks=1 00:10:04.241 00:10:04.241 ' 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:04.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:04.241 13:14:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:04.241 13:14:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.388 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.388 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:12.388 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:12.388 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:12.388 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:12.388 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:12.388 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:12.388 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:12.388 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:12.388 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.389 13:14:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:12.389 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:12.389 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.389 13:14:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:12.389 Found net devices under 0000:31:00.0: cvl_0_0 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:12.389 Found net devices under 0000:31:00.1: cvl_0_1 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.389 13:14:34 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.389 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:12.651 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:12.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:10:12.651 00:10:12.651 --- 10.0.0.2 ping statistics --- 00:10:12.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.651 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:10:12.651 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:10:12.651 00:10:12.651 --- 10.0.0.1 ping statistics --- 00:10:12.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.651 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:10:12.651 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.651 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:12.651 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:12.651 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.651 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:12.651 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:12.651 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.651 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:12.651 13:14:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:12.651 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:12.651 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:12.651 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:12.651 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.651 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=780050 00:10:12.651 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 780050 00:10:12.651 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.651 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 780050 ']' 00:10:12.651 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.651 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.651 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.651 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.651 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.651 [2024-12-05 13:14:35.102077] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:10:12.651 [2024-12-05 13:14:35.102149] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.651 [2024-12-05 13:14:35.196199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.912 [2024-12-05 13:14:35.237660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.912 [2024-12-05 13:14:35.237699] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.912 [2024-12-05 13:14:35.237707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.912 [2024-12-05 13:14:35.237714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.912 [2024-12-05 13:14:35.237720] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.912 [2024-12-05 13:14:35.239350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.912 [2024-12-05 13:14:35.239468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.912 [2024-12-05 13:14:35.239625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.912 [2024-12-05 13:14:35.239625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:13.485 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.485 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:13.485 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:13.485 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:13.485 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.485 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.485 13:14:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:13.746 [2024-12-05 13:14:36.099979] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.746 13:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.007 13:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:14.007 13:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.007 13:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:14.007 13:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.268 13:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:14.268 13:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.529 13:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:14.529 13:14:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:14.791 13:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:14.791 13:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:14.791 13:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.052 13:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:15.052 13:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:15.314 13:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:15.314 13:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:15.314 13:14:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:15.575 13:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:15.575 13:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.836 13:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:15.836 13:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:16.097 13:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.097 [2024-12-05 13:14:38.572733] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.098 13:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:16.359 13:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:16.620 13:14:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:18.009 13:14:40 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:18.009 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:18.009 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.009 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:18.009 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:18.009 13:14:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:20.555 13:14:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:20.555 13:14:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:20.555 13:14:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:20.555 13:14:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:20.555 13:14:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:20.555 13:14:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:20.555 13:14:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:20.555 [global] 00:10:20.555 thread=1 00:10:20.555 invalidate=1 00:10:20.555 rw=write 00:10:20.555 time_based=1 00:10:20.555 runtime=1 00:10:20.555 ioengine=libaio 00:10:20.555 direct=1 00:10:20.555 bs=4096 00:10:20.555 iodepth=1 00:10:20.555 norandommap=0 00:10:20.555 numjobs=1 00:10:20.555 00:10:20.555 verify_dump=1 00:10:20.555 verify_backlog=512 00:10:20.555 verify_state_save=0 00:10:20.555 do_verify=1 00:10:20.555 verify=crc32c-intel 00:10:20.555 [job0] 00:10:20.555 filename=/dev/nvme0n1 00:10:20.555 [job1] 00:10:20.555 filename=/dev/nvme0n2 00:10:20.555 [job2] 00:10:20.555 filename=/dev/nvme0n3 00:10:20.555 [job3] 00:10:20.555 filename=/dev/nvme0n4 00:10:20.555 Could not set queue depth (nvme0n1) 00:10:20.555 Could not set queue depth (nvme0n2) 00:10:20.555 Could not set queue depth (nvme0n3) 00:10:20.555 Could not set queue depth (nvme0n4) 00:10:20.555 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.555 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.555 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.555 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:20.555 fio-3.35 00:10:20.555 Starting 4 threads 00:10:21.939 00:10:21.939 job0: (groupid=0, jobs=1): err= 0: pid=781787: Thu Dec 5 13:14:44 2024 00:10:21.939 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:21.939 slat (nsec): min=6581, max=64804, avg=27972.45, stdev=5205.82 00:10:21.939 clat (usec): min=226, max=1227, avg=899.94, stdev=171.69 00:10:21.939 lat (usec): min=233, max=1255, avg=927.92, stdev=172.45 00:10:21.939 clat percentiles (usec): 00:10:21.939 | 1.00th=[ 388], 5.00th=[ 635], 10.00th=[ 693], 20.00th=[ 742], 
00:10:21.939 | 30.00th=[ 783], 40.00th=[ 881], 50.00th=[ 955], 60.00th=[ 996], 00:10:21.939 | 70.00th=[ 1020], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:10:21.939 | 99.00th=[ 1156], 99.50th=[ 1221], 99.90th=[ 1221], 99.95th=[ 1221], 00:10:21.939 | 99.99th=[ 1221] 00:10:21.939 write: IOPS=818, BW=3273KiB/s (3351kB/s)(3276KiB/1001msec); 0 zone resets 00:10:21.939 slat (nsec): min=9407, max=70766, avg=33718.87, stdev=9816.71 00:10:21.939 clat (usec): min=116, max=1040, avg=594.59, stdev=169.09 00:10:21.939 lat (usec): min=126, max=1077, avg=628.31, stdev=172.38 00:10:21.939 clat percentiles (usec): 00:10:21.939 | 1.00th=[ 210], 5.00th=[ 302], 10.00th=[ 363], 20.00th=[ 457], 00:10:21.939 | 30.00th=[ 519], 40.00th=[ 562], 50.00th=[ 594], 60.00th=[ 635], 00:10:21.939 | 70.00th=[ 668], 80.00th=[ 734], 90.00th=[ 840], 95.00th=[ 881], 00:10:21.939 | 99.00th=[ 947], 99.50th=[ 979], 99.90th=[ 1045], 99.95th=[ 1045], 00:10:21.939 | 99.99th=[ 1045] 00:10:21.939 bw ( KiB/s): min= 4096, max= 4096, per=40.50%, avg=4096.00, stdev= 0.00, samples=1 00:10:21.939 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:21.939 lat (usec) : 250=1.20%, 500=15.10%, 750=43.35%, 1000=25.62% 00:10:21.939 lat (msec) : 2=14.73% 00:10:21.939 cpu : usr=4.00%, sys=4.40%, ctx=1332, majf=0, minf=1 00:10:21.939 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.939 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.939 issued rwts: total=512,819,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.939 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.939 job1: (groupid=0, jobs=1): err= 0: pid=781788: Thu Dec 5 13:14:44 2024 00:10:21.939 read: IOPS=16, BW=67.3KiB/s (68.9kB/s)(68.0KiB/1010msec) 00:10:21.939 slat (nsec): min=26054, max=26642, avg=26287.59, stdev=135.47 00:10:21.939 clat (usec): min=1136, max=42096, avg=39540.89, stdev=9897.11 00:10:21.939 lat (usec): min=1162, max=42123, avg=39567.18, stdev=9897.09 00:10:21.939 clat percentiles (usec): 00:10:21.939 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[41681], 20.00th=[41681], 00:10:21.939 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:21.939 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:21.939 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:21.939 | 99.99th=[42206] 00:10:21.939 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:10:21.939 slat (nsec): min=9960, max=72535, avg=29828.35, stdev=10271.72 00:10:21.939 clat (usec): min=244, max=852, avg=619.58, stdev=111.40 00:10:21.939 lat (usec): min=255, max=915, avg=649.41, stdev=116.78 00:10:21.939 clat percentiles (usec): 00:10:21.939 | 1.00th=[ 355], 5.00th=[ 396], 10.00th=[ 461], 20.00th=[ 529], 00:10:21.939 | 30.00th=[ 578], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 668], 00:10:21.939 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 742], 95.00th=[ 775], 00:10:21.939 | 99.00th=[ 840], 99.50th=[ 848], 99.90th=[ 857], 99.95th=[ 857], 00:10:21.939 | 99.99th=[ 857] 00:10:21.939 bw ( KiB/s): min= 4096, max= 4096, per=40.50%, avg=4096.00, stdev= 0.00, samples=1 00:10:21.940 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:21.940 lat (usec) : 250=0.19%, 500=15.12%, 750=73.53%, 1000=7.94% 00:10:21.940 lat (msec) : 2=0.19%, 50=3.02% 00:10:21.940 cpu : usr=0.99%, sys=1.19%, ctx=531, majf=0, minf=1 00:10:21.940 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.940 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.940 job2: (groupid=0, jobs=1): err= 0: pid=781789: Thu Dec 5 13:14:44 2024 00:10:21.940 read: IOPS=17, BW=70.7KiB/s (72.4kB/s)(72.0KiB/1018msec) 00:10:21.940 slat (nsec): min=27059, max=46694, avg=28713.33, stdev=4517.26 00:10:21.940 clat (usec): min=974, max=42173, avg=37275.25, stdev=13186.48 00:10:21.940 lat (usec): min=1003, max=42201, avg=37303.96, stdev=13186.65 00:10:21.940 clat percentiles (usec): 00:10:21.940 | 1.00th=[ 971], 5.00th=[ 971], 10.00th=[ 1106], 20.00th=[41157], 00:10:21.940 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:21.940 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:21.940 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:21.940 | 99.99th=[42206] 00:10:21.940 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:10:21.940 slat (nsec): min=9343, max=54433, avg=30475.97, stdev=10191.90 00:10:21.940 clat (usec): min=302, max=1005, avg=638.59, stdev=117.85 00:10:21.940 lat (usec): min=315, max=1040, avg=669.07, stdev=123.22 00:10:21.940 clat percentiles (usec): 00:10:21.940 | 1.00th=[ 343], 5.00th=[ 412], 10.00th=[ 478], 20.00th=[ 545], 00:10:21.940 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 676], 00:10:21.940 | 70.00th=[ 709], 80.00th=[ 742], 90.00th=[ 766], 95.00th=[ 799], 00:10:21.940 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 1004], 99.95th=[ 1004], 00:10:21.940 | 99.99th=[ 1004] 00:10:21.940 bw ( KiB/s): min= 4096, max= 4096, per=40.50%, avg=4096.00, stdev= 0.00, samples=1 00:10:21.940 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:21.940 lat (usec) : 500=13.02%, 750=66.60%, 1000=16.98% 00:10:21.940 lat (msec) : 2=0.38%, 50=3.02% 00:10:21.940 cpu : usr=0.98%, sys=1.97%, ctx=530, majf=0, minf=2 00:10:21.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.940 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.940 job3: (groupid=0, jobs=1): err= 0: pid=781790: Thu Dec 5 13:14:44 2024 00:10:21.940 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:21.940 slat (nsec): min=7970, max=46826, avg=28081.24, stdev=3279.19 00:10:21.940 clat (usec): min=556, max=1470, avg=968.75, stdev=93.89 00:10:21.940 lat (usec): min=585, max=1497, avg=996.83, stdev=94.32 00:10:21.940 clat percentiles (usec): 00:10:21.940 | 1.00th=[ 717], 5.00th=[ 807], 10.00th=[ 857], 20.00th=[ 914], 00:10:21.940 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 971], 60.00th=[ 988], 00:10:21.940 | 70.00th=[ 1004], 80.00th=[ 1029], 90.00th=[ 1057], 95.00th=[ 1106], 00:10:21.940 | 99.00th=[ 1221], 99.50th=[ 1369], 99.90th=[ 1467], 99.95th=[ 1467], 00:10:21.940 | 99.99th=[ 1467] 00:10:21.940 write: IOPS=730, BW=2921KiB/s (2991kB/s)(2924KiB/1001msec); 0 zone resets 00:10:21.940 slat (nsec): min=9717, max=56223, avg=31643.56, stdev=10742.37 00:10:21.940 clat (usec): min=163, 
max=1329, avg=624.45, stdev=132.16 00:10:21.940 lat (usec): min=175, max=1369, avg=656.10, stdev=137.08 00:10:21.940 clat percentiles (usec): 00:10:21.940 | 1.00th=[ 334], 5.00th=[ 404], 10.00th=[ 457], 20.00th=[ 510], 00:10:21.940 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 627], 60.00th=[ 668], 00:10:21.940 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 783], 95.00th=[ 816], 00:10:21.940 | 99.00th=[ 947], 99.50th=[ 963], 99.90th=[ 1336], 99.95th=[ 1336], 00:10:21.940 | 99.99th=[ 1336] 00:10:21.940 bw ( KiB/s): min= 4096, max= 4096, per=40.50%, avg=4096.00, stdev= 0.00, samples=1 00:10:21.940 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:21.940 lat (usec) : 250=0.24%, 500=10.30%, 750=38.86%, 1000=36.85% 00:10:21.940 lat (msec) : 2=13.76% 00:10:21.940 cpu : usr=3.40%, sys=4.10%, ctx=1244, majf=0, minf=1 00:10:21.940 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:21.940 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.940 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.940 issued rwts: total=512,731,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:21.940 00:10:21.940 Run status group 0 (all jobs): 00:10:21.940 READ: bw=4161KiB/s (4261kB/s), 67.3KiB/s-2046KiB/s (68.9kB/s-2095kB/s), io=4236KiB (4338kB), run=1001-1018msec 00:10:21.940 WRITE: bw=9.88MiB/s (10.4MB/s), 2012KiB/s-3273KiB/s (2060kB/s-3351kB/s), io=10.1MiB (10.5MB), run=1001-1018msec 00:10:21.940 00:10:21.940 Disk stats (read/write): 00:10:21.940 nvme0n1: ios=564/536, merge=0/0, ticks=1138/242, in_queue=1380, util=96.29% 00:10:21.940 nvme0n2: ios=62/512, merge=0/0, ticks=1102/308, in_queue=1410, util=96.63% 00:10:21.940 nvme0n3: ios=13/512, merge=0/0, ticks=461/275, in_queue=736, util=88.35% 00:10:21.940 nvme0n4: ios=545/512, merge=0/0, ticks=1057/254, in_queue=1311, util=96.78% 00:10:21.940 13:14:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:21.940 [global] 00:10:21.940 thread=1 00:10:21.940 invalidate=1 00:10:21.940 rw=randwrite 00:10:21.940 time_based=1 00:10:21.940 runtime=1 00:10:21.940 ioengine=libaio 00:10:21.940 direct=1 00:10:21.940 bs=4096 00:10:21.940 iodepth=1 00:10:21.940 norandommap=0 00:10:21.940 numjobs=1 00:10:21.940 00:10:21.940 verify_dump=1 00:10:21.940 verify_backlog=512 00:10:21.940 verify_state_save=0 00:10:21.940 do_verify=1 00:10:21.940 verify=crc32c-intel 00:10:21.940 [job0] 00:10:21.940 filename=/dev/nvme0n1 00:10:21.940 [job1] 00:10:21.940 filename=/dev/nvme0n2 00:10:21.940 [job2] 00:10:21.940 filename=/dev/nvme0n3 00:10:21.940 [job3] 00:10:21.940 filename=/dev/nvme0n4 00:10:21.940 Could not set queue depth (nvme0n1) 00:10:21.940 Could not set queue depth (nvme0n2) 00:10:21.940 Could not set queue depth (nvme0n3) 00:10:21.940 Could not set queue depth (nvme0n4) 00:10:22.200 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.200 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.200 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.200 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:22.200 fio-3.35 00:10:22.200 Starting 4 
threads 00:10:23.580 00:10:23.580 job0: (groupid=0, jobs=1): err= 0: pid=782315: Thu Dec 5 13:14:45 2024 00:10:23.580 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:23.580 slat (nsec): min=25554, max=29673, avg=27997.29, stdev=300.77 00:10:23.580 clat (usec): min=794, max=41641, avg=1361.48, stdev=3951.00 00:10:23.581 lat (usec): min=823, max=41668, avg=1389.48, stdev=3950.90 00:10:23.581 clat percentiles (usec): 00:10:23.581 | 1.00th=[ 832], 5.00th=[ 889], 10.00th=[ 914], 20.00th=[ 930], 00:10:23.581 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 979], 00:10:23.581 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1057], 00:10:23.581 | 99.00th=[ 1483], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:10:23.581 | 99.99th=[41681] 00:10:23.581 write: IOPS=549, BW=2198KiB/s (2251kB/s)(2200KiB/1001msec); 0 zone resets 00:10:23.581 slat (nsec): min=9429, max=52752, avg=30925.31, stdev=9523.48 00:10:23.581 clat (usec): min=156, max=1018, avg=472.87, stdev=141.70 00:10:23.581 lat (usec): min=166, max=1053, avg=503.79, stdev=143.18 00:10:23.581 clat percentiles (usec): 00:10:23.581 | 1.00th=[ 200], 5.00th=[ 251], 10.00th=[ 310], 20.00th=[ 347], 00:10:23.581 | 30.00th=[ 388], 40.00th=[ 429], 50.00th=[ 465], 60.00th=[ 502], 00:10:23.581 | 70.00th=[ 545], 80.00th=[ 578], 90.00th=[ 644], 95.00th=[ 693], 00:10:23.581 | 99.00th=[ 914], 99.50th=[ 971], 99.90th=[ 1020], 99.95th=[ 1020], 00:10:23.581 | 99.99th=[ 1020] 00:10:23.581 bw ( KiB/s): min= 4096, max= 4096, per=50.32%, avg=4096.00, stdev= 0.00, samples=1 00:10:23.581 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:23.581 lat (usec) : 250=2.54%, 500=28.06%, 750=19.30%, 1000=39.45% 00:10:23.581 lat (msec) : 2=10.17%, 50=0.47% 00:10:23.581 cpu : usr=1.60%, sys=4.30%, ctx=1066, majf=0, minf=1 00:10:23.581 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.581 issued rwts: total=512,550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.581 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.581 job1: (groupid=0, jobs=1): err= 0: pid=782318: Thu Dec 5 13:14:45 2024 00:10:23.581 read: IOPS=16, BW=66.3KiB/s (67.9kB/s)(68.0KiB/1025msec) 00:10:23.581 slat (nsec): min=26705, max=27548, avg=27095.18, stdev=260.10 00:10:23.581 clat (usec): min=40950, max=42142, avg=41691.98, stdev=400.71 00:10:23.581 lat (usec): min=40977, max=42170, avg=41719.08, stdev=400.74 00:10:23.581 clat percentiles (usec): 00:10:23.581 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:23.581 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:10:23.581 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:23.581 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:23.581 | 99.99th=[42206] 00:10:23.581 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:10:23.581 slat (nsec): min=8866, max=69646, avg=31269.52, stdev=8668.53 00:10:23.581 clat (usec): min=144, max=1447, avg=576.84, stdev=151.61 00:10:23.581 lat (usec): min=155, max=1480, avg=608.11, stdev=154.24 00:10:23.581 clat percentiles (usec): 00:10:23.581 | 1.00th=[ 281], 5.00th=[ 310], 10.00th=[ 392], 20.00th=[ 449], 00:10:23.581 | 30.00th=[ 510], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 611], 00:10:23.581 | 70.00th=[ 644], 80.00th=[ 701], 
90.00th=[ 758], 95.00th=[ 840], 00:10:23.581 | 99.00th=[ 938], 99.50th=[ 971], 99.90th=[ 1450], 99.95th=[ 1450], 00:10:23.581 | 99.99th=[ 1450] 00:10:23.581 bw ( KiB/s): min= 4096, max= 4096, per=50.32%, avg=4096.00, stdev= 0.00, samples=1 00:10:23.581 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:23.581 lat (usec) : 250=0.76%, 500=25.33%, 750=59.74%, 1000=10.78% 00:10:23.581 lat (msec) : 2=0.19%, 50=3.21% 00:10:23.581 cpu : usr=0.68%, sys=2.44%, ctx=530, majf=0, minf=1 00:10:23.581 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.581 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.581 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.581 job2: (groupid=0, jobs=1): err= 0: pid=782319: Thu Dec 5 13:14:45 2024 00:10:23.581 read: IOPS=151, BW=605KiB/s (619kB/s)(612KiB/1012msec) 00:10:23.581 slat (nsec): min=24753, max=90818, avg=27548.67, stdev=5210.79 00:10:23.581 clat (usec): min=475, max=42062, avg=5554.88, stdev=12902.33 00:10:23.581 lat (usec): min=502, max=42087, avg=5582.43, stdev=12901.53 00:10:23.581 clat percentiles (usec): 00:10:23.581 | 1.00th=[ 537], 5.00th=[ 840], 10.00th=[ 889], 20.00th=[ 947], 00:10:23.581 | 30.00th=[ 971], 40.00th=[ 996], 50.00th=[ 1020], 60.00th=[ 1045], 00:10:23.581 | 70.00th=[ 1057], 80.00th=[ 1123], 90.00th=[41681], 95.00th=[42206], 00:10:23.581 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:23.581 | 99.99th=[42206] 00:10:23.581 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:10:23.581 slat (nsec): min=9282, max=85670, avg=23115.33, stdev=11331.63 00:10:23.581 clat (usec): min=97, max=676, avg=275.83, stdev=143.47 00:10:23.581 lat (usec): min=107, max=721, avg=298.94, stdev=151.14 00:10:23.581 clat percentiles (usec): 00:10:23.581 | 1.00th=[ 100], 5.00th=[ 105], 10.00th=[ 111], 20.00th=[ 121], 00:10:23.581 | 30.00th=[ 141], 40.00th=[ 239], 50.00th=[ 273], 60.00th=[ 297], 00:10:23.581 | 70.00th=[ 351], 80.00th=[ 408], 90.00th=[ 482], 95.00th=[ 537], 00:10:23.581 | 99.00th=[ 611], 99.50th=[ 627], 99.90th=[ 676], 99.95th=[ 676], 00:10:23.581 | 99.99th=[ 676] 00:10:23.581 bw ( KiB/s): min= 4096, max= 4096, per=50.32%, avg=4096.00, stdev= 0.00, samples=1 00:10:23.581 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:23.581 lat (usec) : 100=0.75%, 250=31.73%, 500=37.89%, 750=7.07%, 1000=8.87% 00:10:23.581 lat (msec) : 2=11.13%, 50=2.56% 00:10:23.581 cpu : usr=1.19%, sys=1.38%, ctx=666, majf=0, minf=2 00:10:23.581 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.581 issued rwts: total=153,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.581 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.581 job3: (groupid=0, jobs=1): err= 0: pid=782320: Thu Dec 5 13:14:45 2024 00:10:23.581 read: IOPS=19, BW=78.2KiB/s (80.1kB/s)(80.0KiB/1023msec) 00:10:23.581 slat (nsec): min=26735, max=31699, avg=27345.70, stdev=1091.36 00:10:23.581 clat (usec): min=858, max=44046, avg=39148.88, stdev=9040.22 00:10:23.581 lat (usec): min=885, max=44078, avg=39176.22, stdev=9040.35 00:10:23.581 clat percentiles (usec): 00:10:23.581 | 
1.00th=[ 857], 5.00th=[ 857], 10.00th=[40633], 20.00th=[41157], 00:10:23.581 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:23.581 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:23.581 | 99.00th=[44303], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:10:23.581 | 99.99th=[44303] 00:10:23.581 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:10:23.581 slat (nsec): min=8826, max=53014, avg=30039.08, stdev=9086.05 00:10:23.581 clat (usec): min=142, max=883, avg=429.09, stdev=144.63 00:10:23.581 lat (usec): min=162, max=916, avg=459.13, stdev=146.38 00:10:23.581 clat percentiles (usec): 00:10:23.581 | 1.00th=[ 155], 5.00th=[ 217], 10.00th=[ 269], 20.00th=[ 297], 00:10:23.581 | 30.00th=[ 326], 40.00th=[ 367], 50.00th=[ 420], 60.00th=[ 465], 00:10:23.581 | 70.00th=[ 498], 80.00th=[ 562], 90.00th=[ 627], 95.00th=[ 676], 00:10:23.581 | 99.00th=[ 775], 99.50th=[ 799], 99.90th=[ 881], 99.95th=[ 881], 00:10:23.581 | 99.99th=[ 881] 00:10:23.581 bw ( KiB/s): min= 4096, max= 4096, per=50.32%, avg=4096.00, stdev= 0.00, samples=1 00:10:23.581 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:23.581 lat (usec) : 250=7.89%, 500=60.15%, 750=26.32%, 1000=2.07% 00:10:23.581 lat (msec) : 50=3.57% 00:10:23.581 cpu : usr=0.88%, sys=2.15%, ctx=533, majf=0, minf=1 00:10:23.581 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.581 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.581 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.581 00:10:23.581 Run status group 0 (all jobs): 00:10:23.581 READ: bw=2740KiB/s (2805kB/s), 66.3KiB/s-2046KiB/s (67.9kB/s-2095kB/s), io=2808KiB (2875kB), run=1001-1025msec 00:10:23.581 WRITE: bw=8140KiB/s (8336kB/s), 1998KiB/s-2198KiB/s (2046kB/s-2251kB/s), io=8344KiB (8544kB), run=1001-1025msec 00:10:23.581 00:10:23.581 Disk stats (read/write): 00:10:23.581 nvme0n1: ios=415/512, merge=0/0, ticks=902/228, in_queue=1130, util=97.39% 00:10:23.581 nvme0n2: ios=48/512, merge=0/0, ticks=541/218, in_queue=759, util=87.36% 00:10:23.581 nvme0n3: ios=148/512, merge=0/0, ticks=637/130, in_queue=767, util=88.38% 00:10:23.581 nvme0n4: ios=50/512, merge=0/0, ticks=704/164, in_queue=868, util=99.36% 00:10:23.581 13:14:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:23.581 [global] 00:10:23.581 thread=1 00:10:23.581 invalidate=1 00:10:23.581 rw=write 00:10:23.581 time_based=1 00:10:23.581 runtime=1 00:10:23.581 ioengine=libaio 00:10:23.581 direct=1 00:10:23.581 bs=4096 00:10:23.581 iodepth=128 00:10:23.581 norandommap=0 00:10:23.581 numjobs=1 00:10:23.581 00:10:23.582 verify_dump=1 00:10:23.582 verify_backlog=512 00:10:23.582 verify_state_save=0 00:10:23.582 do_verify=1 00:10:23.582 verify=crc32c-intel 00:10:23.582 [job0] 00:10:23.582 filename=/dev/nvme0n1 00:10:23.582 [job1] 00:10:23.582 filename=/dev/nvme0n2 00:10:23.582 [job2] 00:10:23.582 filename=/dev/nvme0n3 00:10:23.582 [job3] 00:10:23.582 filename=/dev/nvme0n4 00:10:23.582 Could not set queue depth (nvme0n1) 00:10:23.582 Could not set queue depth (nvme0n2) 00:10:23.582 Could not set queue depth (nvme0n3) 00:10:23.582 Could not set queue depth (nvme0n4) 
00:10:23.842 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.842 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.842 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.842 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:23.842 fio-3.35 00:10:23.842 Starting 4 threads 00:10:25.331 00:10:25.331 job0: (groupid=0, jobs=1): err= 0: pid=782836: Thu Dec 5 13:14:47 2024 00:10:25.331 read: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec) 00:10:25.331 slat (nsec): min=935, max=19014k, avg=77063.34, stdev=599963.67 00:10:25.331 clat (usec): min=2466, max=53140, avg=10359.11, stdev=6188.44 00:10:25.331 lat (usec): min=2504, max=54982, avg=10436.18, stdev=6237.23 00:10:25.331 clat percentiles (usec): 00:10:25.331 | 1.00th=[ 3621], 5.00th=[ 4359], 10.00th=[ 5800], 20.00th=[ 6980], 00:10:25.331 | 30.00th=[ 7635], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9765], 00:10:25.331 | 70.00th=[10159], 80.00th=[11731], 90.00th=[13566], 95.00th=[23987], 00:10:25.331 | 99.00th=[35390], 99.50th=[39060], 99.90th=[53216], 99.95th=[53216], 00:10:25.331 | 99.99th=[53216] 00:10:25.331 write: IOPS=6877, BW=26.9MiB/s (28.2MB/s)(26.9MiB/1002msec); 0 zone resets 00:10:25.331 slat (nsec): min=1650, max=8696.8k, avg=63847.75, stdev=436287.57 00:10:25.331 clat (usec): min=465, max=32472, avg=8409.80, stdev=3388.08 00:10:25.331 lat (usec): min=634, max=32480, avg=8473.65, stdev=3413.03 00:10:25.331 clat percentiles (usec): 00:10:25.331 | 1.00th=[ 2507], 5.00th=[ 4047], 10.00th=[ 4883], 20.00th=[ 6063], 00:10:25.331 | 30.00th=[ 6849], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 8979], 00:10:25.331 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[12780], 00:10:25.331 | 99.00th=[26346], 99.50th=[31327], 99.90th=[32113], 99.95th=[32375], 00:10:25.331 | 99.99th=[32375] 00:10:25.331 bw ( KiB/s): min=25664, max=28440, per=30.51%, avg=27052.00, stdev=1962.93, samples=2 00:10:25.331 iops : min= 6416, max= 7110, avg=6763.00, stdev=490.73, samples=2 00:10:25.331 lat (usec) : 500=0.01%, 750=0.07%, 1000=0.01% 00:10:25.331 lat (msec) : 2=0.24%, 4=3.62%, 10=70.04%, 20=22.06%, 50=3.84% 00:10:25.331 lat (msec) : 100=0.13% 00:10:25.331 cpu : usr=4.80%, sys=7.49%, ctx=501, majf=0, minf=1 00:10:25.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:25.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.331 issued rwts: total=6656,6891,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.331 job1: (groupid=0, jobs=1): err= 0: pid=782837: Thu Dec 5 13:14:47 2024 00:10:25.331 read: IOPS=4618, BW=18.0MiB/s (18.9MB/s)(18.2MiB/1007msec) 00:10:25.331 slat (nsec): min=967, max=20481k, avg=89848.32, stdev=758678.42 00:10:25.331 clat (usec): min=2552, max=44369, avg=12781.85, stdev=7162.88 00:10:25.331 lat (usec): min=2556, max=44397, avg=12871.69, stdev=7221.03 00:10:25.331 clat percentiles (usec): 00:10:25.331 | 1.00th=[ 3949], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 6849], 00:10:25.331 | 30.00th=[ 8094], 40.00th=[ 8848], 50.00th=[10552], 60.00th=[12518], 00:10:25.331 | 70.00th=[15270], 80.00th=[17433], 90.00th=[22938], 95.00th=[26870], 00:10:25.331 | 99.00th=[38536], 
99.50th=[41681], 99.90th=[44303], 99.95th=[44303], 00:10:25.331 | 99.99th=[44303] 00:10:25.331 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:10:25.331 slat (nsec): min=1657, max=18038k, avg=95557.83, stdev=700837.51 00:10:25.331 clat (usec): min=1087, max=78071, avg=13303.67, stdev=14250.35 00:10:25.331 lat (usec): min=1121, max=78304, avg=13399.22, stdev=14334.94 00:10:25.331 clat percentiles (usec): 00:10:25.331 | 1.00th=[ 2245], 5.00th=[ 2868], 10.00th=[ 4146], 20.00th=[ 5276], 00:10:25.331 | 30.00th=[ 6325], 40.00th=[ 6849], 50.00th=[ 7308], 60.00th=[ 8717], 00:10:25.331 | 70.00th=[11207], 80.00th=[19530], 90.00th=[32375], 95.00th=[43779], 00:10:25.331 | 99.00th=[73925], 99.50th=[74974], 99.90th=[78119], 99.95th=[78119], 00:10:25.331 | 99.99th=[78119] 00:10:25.331 bw ( KiB/s): min=18512, max=21776, per=22.72%, avg=20144.00, stdev=2308.00, samples=2 00:10:25.331 iops : min= 4628, max= 5444, avg=5036.00, stdev=577.00, samples=2 00:10:25.331 lat (msec) : 2=0.44%, 4=5.36%, 10=49.66%, 20=27.80%, 50=14.47% 00:10:25.331 lat (msec) : 100=2.27% 00:10:25.331 cpu : usr=3.68%, sys=5.77%, ctx=353, majf=0, minf=1 00:10:25.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:25.331 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.331 issued rwts: total=4651,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.331 job2: (groupid=0, jobs=1): err= 0: pid=782838: Thu Dec 5 13:14:47 2024 00:10:25.331 read: IOPS=5704, BW=22.3MiB/s (23.4MB/s)(22.5MiB/1008msec) 00:10:25.331 slat (usec): min=2, max=16047, avg=77.21, stdev=652.36 00:10:25.331 clat (usec): min=1979, max=54054, avg=11728.72, stdev=5097.03 00:10:25.331 lat (usec): min=1986, max=54061, avg=11805.93, stdev=5141.51 00:10:25.331 clat percentiles (usec): 00:10:25.331 | 1.00th=[ 4293], 5.00th=[ 6718], 10.00th=[ 7046], 20.00th=[ 7898], 00:10:25.331 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[10159], 60.00th=[11076], 00:10:25.331 | 70.00th=[13304], 80.00th=[16057], 90.00th=[18482], 95.00th=[22152], 00:10:25.331 | 99.00th=[24773], 99.50th=[34866], 99.90th=[48497], 99.95th=[48497], 00:10:25.331 | 99.99th=[54264] 00:10:25.331 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:10:25.331 slat (nsec): min=1671, max=18611k, avg=68076.81, stdev=669680.65 00:10:25.331 clat (usec): min=632, max=32342, avg=9804.57, stdev=5136.40 00:10:25.331 lat (usec): min=703, max=32362, avg=9872.65, stdev=5191.06 00:10:25.331 clat percentiles (usec): 00:10:25.331 | 1.00th=[ 2343], 5.00th=[ 3752], 10.00th=[ 4883], 20.00th=[ 5800], 00:10:25.331 | 30.00th=[ 6718], 40.00th=[ 7439], 50.00th=[ 8356], 60.00th=[ 9765], 00:10:25.331 | 70.00th=[11076], 80.00th=[13304], 90.00th=[16450], 95.00th=[20317], 00:10:25.331 | 99.00th=[26346], 99.50th=[27657], 99.90th=[31589], 99.95th=[31589], 00:10:25.331 | 99.99th=[32375] 00:10:25.331 bw ( KiB/s): min=23120, max=25952, per=27.67%, avg=24536.00, stdev=2002.53, samples=2 00:10:25.331 iops : min= 5780, max= 6488, avg=6134.00, stdev=500.63, samples=2 00:10:25.331 lat (usec) : 750=0.02%, 1000=0.01% 00:10:25.331 lat (msec) : 2=0.45%, 4=2.30%, 10=52.23%, 20=38.68%, 50=6.31% 00:10:25.331 lat (msec) : 100=0.01% 00:10:25.331 cpu : usr=4.47%, sys=8.24%, ctx=254, majf=0, minf=1 00:10:25.331 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:25.331 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.331 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.331 issued rwts: total=5750,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.331 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.331 job3: (groupid=0, jobs=1): err= 0: pid=782839: Thu Dec 5 13:14:47 2024 00:10:25.332 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:10:25.332 slat (nsec): min=1073, max=22428k, avg=136202.49, stdev=1072435.61 00:10:25.332 clat (usec): min=1504, max=77421, avg=16984.50, stdev=12091.62 00:10:25.332 lat (usec): min=1512, max=77450, avg=17120.70, stdev=12204.60 00:10:25.332 clat percentiles (usec): 00:10:25.332 | 1.00th=[ 4752], 5.00th=[ 6783], 10.00th=[ 7701], 20.00th=[ 7963], 00:10:25.332 | 30.00th=[ 9372], 40.00th=[10945], 50.00th=[14091], 60.00th=[16909], 00:10:25.332 | 70.00th=[18220], 80.00th=[21627], 90.00th=[31065], 95.00th=[44827], 00:10:25.332 | 99.00th=[68682], 99.50th=[69731], 99.90th=[69731], 99.95th=[72877], 00:10:25.332 | 99.99th=[77071] 00:10:25.332 write: IOPS=4171, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1004msec); 0 zone resets 00:10:25.332 slat (nsec): min=1605, max=18587k, avg=95778.47, stdev=782909.99 00:10:25.332 clat (usec): min=541, max=70355, avg=13793.72, stdev=10628.77 00:10:25.332 lat (usec): min=742, max=70404, avg=13889.50, stdev=10702.41 00:10:25.332 clat percentiles (usec): 00:10:25.332 | 1.00th=[ 1303], 5.00th=[ 4293], 10.00th=[ 5604], 20.00th=[ 6456], 00:10:25.332 | 30.00th=[ 7701], 40.00th=[ 8848], 50.00th=[10028], 60.00th=[11994], 00:10:25.332 | 70.00th=[13435], 80.00th=[18482], 90.00th=[32637], 95.00th=[35914], 00:10:25.332 | 99.00th=[60556], 99.50th=[60556], 99.90th=[60556], 99.95th=[61080], 00:10:25.332 | 99.99th=[70779] 00:10:25.332 bw ( KiB/s): min=11024, max=21744, per=18.48%, avg=16384.00, stdev=7580.18, samples=2 00:10:25.332 iops : min= 2756, max= 5436, avg=4096.00, stdev=1895.05, samples=2 00:10:25.332 lat (usec) : 750=0.02%, 1000=0.17% 00:10:25.332 lat (msec) : 2=0.74%, 4=1.41%, 10=37.89%, 20=36.83%, 50=20.84% 00:10:25.332 lat (msec) : 100=2.10% 00:10:25.332 cpu : usr=3.09%, sys=5.98%, ctx=246, majf=0, minf=2 00:10:25.332 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:25.332 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.332 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:25.332 issued rwts: total=4096,4188,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.332 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:25.332 00:10:25.332 Run status group 0 (all jobs): 00:10:25.332 READ: bw=82.0MiB/s (86.0MB/s), 15.9MiB/s-25.9MiB/s (16.7MB/s-27.2MB/s), io=82.6MiB (86.6MB), run=1002-1008msec 00:10:25.332 WRITE: bw=86.6MiB/s (90.8MB/s), 16.3MiB/s-26.9MiB/s (17.1MB/s-28.2MB/s), io=87.3MiB (91.5MB), run=1002-1008msec 00:10:25.332 00:10:25.332 Disk stats (read/write): 00:10:25.332 nvme0n1: ios=5567/5632, merge=0/0, ticks=42310/35877, in_queue=78187, util=84.47% 00:10:25.332 nvme0n2: ios=4110/4103, merge=0/0, ticks=49514/51953, in_queue=101467, util=87.77% 00:10:25.332 nvme0n3: ios=4672/4906, merge=0/0, ticks=54813/46296, in_queue=101109, util=95.25% 00:10:25.332 nvme0n4: ios=3641/3951, merge=0/0, ticks=43578/39702, in_queue=83280, util=97.01% 00:10:25.332 13:14:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 
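Here the wrapper is invoked again with -t randwrite for the random-write pass. Comparing each wrapper invocation in this log with the job file it then dumps suggests the flag mapping sketched below; this is inference from the traces, not the script's documented interface:

    # Inferred from the invocation/config pairs in this log (not authoritative):
    #   -p nvmf      -> target the nvmf filename set (/dev/nvme0n1..n4)
    #   -i 4096      -> bs=4096
    #   -d 128       -> iodepth=128
    #   -t randwrite -> rw=randwrite
    #   -r 1         -> runtime=1 (with time_based=1)
    #   -v           -> add the do_verify=1 / verify=crc32c-intel block
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_ROOT/scripts/fio-wrapper" -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
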
00:10:25.332 [global] 00:10:25.332 thread=1 00:10:25.332 invalidate=1 00:10:25.332 rw=randwrite 00:10:25.332 time_based=1 00:10:25.332 runtime=1 00:10:25.332 ioengine=libaio 00:10:25.332 direct=1 00:10:25.332 bs=4096 00:10:25.332 iodepth=128 00:10:25.332 norandommap=0 00:10:25.332 numjobs=1 00:10:25.332 00:10:25.332 verify_dump=1 00:10:25.332 verify_backlog=512 00:10:25.332 verify_state_save=0 00:10:25.332 do_verify=1 00:10:25.332 verify=crc32c-intel 00:10:25.332 [job0] 00:10:25.332 filename=/dev/nvme0n1 00:10:25.332 [job1] 00:10:25.332 filename=/dev/nvme0n2 00:10:25.332 [job2] 00:10:25.332 filename=/dev/nvme0n3 00:10:25.332 [job3] 00:10:25.332 filename=/dev/nvme0n4 00:10:25.332 Could not set queue depth (nvme0n1) 00:10:25.332 Could not set queue depth (nvme0n2) 00:10:25.332 Could not set queue depth (nvme0n3) 00:10:25.332 Could not set queue depth (nvme0n4) 00:10:25.610 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.610 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.610 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.610 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:25.610 fio-3.35 00:10:25.610 Starting 4 threads 00:10:26.994 00:10:26.995 job0: (groupid=0, jobs=1): err= 0: pid=783371: Thu Dec 5 13:14:49 2024 00:10:26.995 read: IOPS=2507, BW=9.79MiB/s (10.3MB/s)(10.0MiB/1021msec) 00:10:26.995 slat (nsec): min=1435, max=18504k, avg=183847.24, stdev=1173996.19 00:10:26.995 clat (msec): min=7, max=134, avg=20.34, stdev=17.48 00:10:26.995 lat (msec): min=7, max=134, avg=20.52, stdev=17.66 00:10:26.995 clat percentiles (msec): 00:10:26.995 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:10:26.995 | 30.00th=[ 12], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 18], 00:10:26.995 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 32], 95.00th=[ 55], 00:10:26.995 | 99.00th=[ 114], 99.50th=[ 126], 99.90th=[ 136], 99.95th=[ 136], 00:10:26.995 | 99.99th=[ 136] 00:10:26.995 write: IOPS=2661, BW=10.4MiB/s (10.9MB/s)(10.6MiB/1021msec); 0 zone resets 00:10:26.995 slat (nsec): min=1887, max=17655k, avg=190115.04, stdev=1071971.80 00:10:26.995 clat (msec): min=3, max=134, avg=28.49, stdev=24.85 00:10:26.995 lat (msec): min=3, max=134, avg=28.68, stdev=24.99 00:10:26.995 clat percentiles (msec): 00:10:26.995 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:10:26.995 | 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 24], 00:10:26.995 | 70.00th=[ 37], 80.00th=[ 44], 90.00th=[ 56], 95.00th=[ 81], 00:10:26.995 | 99.00th=[ 125], 99.50th=[ 126], 99.90th=[ 128], 99.95th=[ 134], 00:10:26.995 | 99.99th=[ 136] 00:10:26.995 bw ( KiB/s): min= 5496, max=15216, per=18.84%, avg=10356.00, stdev=6873.08, samples=2 00:10:26.995 iops : min= 1374, max= 3804, avg=2589.00, stdev=1718.27, samples=2 00:10:26.995 lat (msec) : 4=0.23%, 10=12.60%, 20=49.88%, 50=27.31%, 100=7.45% 00:10:26.995 lat (msec) : 250=2.54% 00:10:26.995 cpu : usr=2.55%, sys=3.04%, ctx=203, majf=0, minf=1 00:10:26.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:26.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.995 issued rwts: total=2560,2717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.995 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:10:26.995 job1: (groupid=0, jobs=1): err= 0: pid=783372: Thu Dec 5 13:14:49 2024 00:10:26.995 read: IOPS=3020, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1017msec) 00:10:26.995 slat (nsec): min=962, max=18240k, avg=117021.78, stdev=967029.01 00:10:26.995 clat (usec): min=867, max=85738, avg=16846.85, stdev=11782.23 00:10:26.995 lat (usec): min=893, max=85742, avg=16963.87, stdev=11886.60 00:10:26.995 clat percentiles (usec): 00:10:26.995 | 1.00th=[ 1598], 5.00th=[ 3163], 10.00th=[ 4490], 20.00th=[ 6980], 00:10:26.995 | 30.00th=[ 7898], 40.00th=[11863], 50.00th=[14091], 60.00th=[16581], 00:10:26.995 | 70.00th=[22414], 80.00th=[26084], 90.00th=[32637], 95.00th=[41157], 00:10:26.995 | 99.00th=[47449], 99.50th=[47449], 99.90th=[85459], 99.95th=[85459], 00:10:26.995 | 99.99th=[85459] 00:10:26.995 write: IOPS=3523, BW=13.8MiB/s (14.4MB/s)(14.0MiB/1017msec); 0 zone resets 00:10:26.995 slat (nsec): min=1612, max=19791k, avg=142738.70, stdev=1075688.13 00:10:26.995 clat (usec): min=498, max=133474, avg=21595.94, stdev=25376.97 00:10:26.995 lat (usec): min=529, max=133481, avg=21738.68, stdev=25524.81 00:10:26.995 clat percentiles (usec): 00:10:26.995 | 1.00th=[ 1139], 5.00th=[ 1860], 10.00th=[ 2311], 20.00th=[ 3818], 00:10:26.995 | 30.00th=[ 5604], 40.00th=[ 7046], 50.00th=[ 12649], 60.00th=[ 15795], 00:10:26.995 | 70.00th=[ 27132], 80.00th=[ 36963], 90.00th=[ 46400], 95.00th=[ 76022], 00:10:26.995 | 99.00th=[123208], 99.50th=[127402], 99.90th=[133694], 99.95th=[133694], 00:10:26.995 | 99.99th=[133694] 00:10:26.995 bw ( KiB/s): min= 7160, max=20480, per=25.14%, avg=13820.00, stdev=9418.66, samples=2 00:10:26.995 iops : min= 1790, max= 5120, avg=3455.00, stdev=2354.67, samples=2 00:10:26.995 lat (usec) : 500=0.02%, 750=0.03%, 1000=0.27% 00:10:26.995 lat (msec) : 2=4.64%, 4=9.95%, 10=27.71%, 20=23.46%, 50=28.64% 00:10:26.995 lat (msec) : 100=3.67%, 250=1.62% 00:10:26.995 cpu : usr=2.36%, sys=4.23%, ctx=304, majf=0, minf=1 00:10:26.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:26.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.995 issued rwts: total=3072,3583,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.995 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.995 job2: (groupid=0, jobs=1): err= 0: pid=783373: Thu Dec 5 13:14:49 2024 00:10:26.995 read: IOPS=4839, BW=18.9MiB/s (19.8MB/s)(20.0MiB/1056msec) 00:10:26.995 slat (nsec): min=941, max=34666k, avg=113537.22, stdev=1170648.84 00:10:26.995 clat (usec): min=1867, max=86987, avg=16204.90, stdev=15769.07 00:10:26.995 lat (usec): min=1905, max=99504, avg=16318.43, stdev=15906.88 00:10:26.995 clat percentiles (usec): 00:10:26.995 | 1.00th=[ 4113], 5.00th=[ 4948], 10.00th=[ 5800], 20.00th=[ 6521], 00:10:26.995 | 30.00th=[ 7177], 40.00th=[ 8029], 50.00th=[ 8848], 60.00th=[10159], 00:10:26.995 | 70.00th=[14877], 80.00th=[24249], 90.00th=[44303], 95.00th=[55313], 00:10:26.995 | 99.00th=[71828], 99.50th=[71828], 99.90th=[72877], 99.95th=[72877], 00:10:26.995 | 99.99th=[86508] 00:10:26.995 write: IOPS=4848, BW=18.9MiB/s (19.9MB/s)(20.0MiB/1056msec); 0 zone resets 00:10:26.995 slat (nsec): min=1568, max=26599k, avg=68889.65, stdev=729682.78 00:10:26.995 clat (usec): min=781, max=56766, avg=9987.49, stdev=7765.75 00:10:26.995 lat (usec): min=789, max=56788, avg=10056.38, stdev=7834.54 00:10:26.995 clat percentiles (usec): 00:10:26.995 | 1.00th=[ 1450], 
5.00th=[ 3556], 10.00th=[ 3884], 20.00th=[ 4490], 00:10:26.995 | 30.00th=[ 5800], 40.00th=[ 6194], 50.00th=[ 6980], 60.00th=[ 8029], 00:10:26.995 | 70.00th=[11469], 80.00th=[14091], 90.00th=[22414], 95.00th=[29492], 00:10:26.995 | 99.00th=[38536], 99.50th=[38536], 99.90th=[44827], 99.95th=[44827], 00:10:26.995 | 99.99th=[56886] 00:10:26.995 bw ( KiB/s): min=10320, max=30640, per=37.26%, avg=20480.00, stdev=14368.41, samples=2 00:10:26.995 iops : min= 2580, max= 7660, avg=5120.00, stdev=3592.10, samples=2 00:10:26.995 lat (usec) : 1000=0.25% 00:10:26.995 lat (msec) : 2=0.67%, 4=5.12%, 10=57.36%, 20=19.81%, 50=14.11% 00:10:26.995 lat (msec) : 100=2.67% 00:10:26.995 cpu : usr=3.60%, sys=5.69%, ctx=295, majf=0, minf=1 00:10:26.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:26.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.995 issued rwts: total=5111,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.995 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.995 job3: (groupid=0, jobs=1): err= 0: pid=783374: Thu Dec 5 13:14:49 2024 00:10:26.995 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:10:26.995 slat (nsec): min=920, max=17041k, avg=112955.84, stdev=928771.01 00:10:26.995 clat (usec): min=750, max=52370, avg=15272.33, stdev=7564.51 00:10:26.995 lat (usec): min=755, max=52378, avg=15385.29, stdev=7643.38 00:10:26.995 clat percentiles (usec): 00:10:26.995 | 1.00th=[ 1844], 5.00th=[ 2540], 10.00th=[ 4228], 20.00th=[11076], 00:10:26.995 | 30.00th=[13042], 40.00th=[13698], 50.00th=[15139], 60.00th=[16319], 00:10:26.995 | 70.00th=[18482], 80.00th=[20579], 90.00th=[23725], 95.00th=[26084], 00:10:26.995 | 99.00th=[35914], 99.50th=[49546], 99.90th=[52167], 99.95th=[52167], 00:10:26.995 | 99.99th=[52167] 00:10:26.995 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.1MiB/1011msec); 0 zone resets 00:10:26.995 slat (nsec): min=1581, max=19752k, avg=191757.53, stdev=1124367.55 00:10:26.995 clat (usec): min=529, max=110326, avg=26339.87, stdev=23888.12 00:10:26.995 lat (usec): min=561, max=110336, avg=26531.62, stdev=24054.02 00:10:26.995 clat percentiles (usec): 00:10:26.995 | 1.00th=[ 1106], 5.00th=[ 5407], 10.00th=[ 8029], 20.00th=[ 10814], 00:10:26.995 | 30.00th=[ 11863], 40.00th=[ 12649], 50.00th=[ 15926], 60.00th=[ 17433], 00:10:26.995 | 70.00th=[ 29492], 80.00th=[ 43779], 90.00th=[ 58983], 95.00th=[ 84411], 00:10:26.995 | 99.00th=[102237], 99.50th=[102237], 99.90th=[110625], 99.95th=[110625], 00:10:26.995 | 99.99th=[110625] 00:10:26.995 bw ( KiB/s): min=12288, max=12288, per=22.36%, avg=12288.00, stdev= 0.00, samples=2 00:10:26.995 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:26.995 lat (usec) : 750=0.08%, 1000=0.41% 00:10:26.995 lat (msec) : 2=1.53%, 4=4.77%, 10=11.54%, 20=52.19%, 50=21.21% 00:10:26.995 lat (msec) : 100=7.25%, 250=1.02% 00:10:26.995 cpu : usr=2.48%, sys=3.47%, ctx=244, majf=0, minf=2 00:10:26.995 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:26.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.995 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:26.995 issued rwts: total=3072,3090,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.995 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:26.995 00:10:26.995 Run status group 0 (all jobs): 00:10:26.995 READ: bw=51.1MiB/s 
(53.6MB/s), 9.79MiB/s-18.9MiB/s (10.3MB/s-19.8MB/s), io=54.0MiB (56.6MB), run=1011-1056msec 00:10:26.995 WRITE: bw=53.7MiB/s (56.3MB/s), 10.4MiB/s-18.9MiB/s (10.9MB/s-19.9MB/s), io=56.7MiB (59.4MB), run=1011-1056msec 00:10:26.995 00:10:26.995 Disk stats (read/write): 00:10:26.995 nvme0n1: ios=2098/2439, merge=0/0, ticks=35719/65452, in_queue=101171, util=86.57% 00:10:26.995 nvme0n2: ios=3110/3303, merge=0/0, ticks=33427/36985, in_queue=70412, util=87.46% 00:10:26.995 nvme0n3: ios=4659/4836, merge=0/0, ticks=36366/29442, in_queue=65808, util=95.35% 00:10:26.995 nvme0n4: ios=2058/2495, merge=0/0, ticks=29338/69771, in_queue=99109, util=89.42% 00:10:26.995 13:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:26.995 13:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=783706 00:10:26.995 13:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:26.995 13:14:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:26.995 [global] 00:10:26.995 thread=1 00:10:26.995 invalidate=1 00:10:26.995 rw=read 00:10:26.995 time_based=1 00:10:26.995 runtime=10 00:10:26.995 ioengine=libaio 00:10:26.995 direct=1 00:10:26.995 bs=4096 00:10:26.995 iodepth=1 00:10:26.995 norandommap=1 00:10:26.995 numjobs=1 00:10:26.995 00:10:26.995 [job0] 00:10:26.995 filename=/dev/nvme0n1 00:10:26.995 [job1] 00:10:26.996 filename=/dev/nvme0n2 00:10:26.996 [job2] 00:10:26.996 filename=/dev/nvme0n3 00:10:26.996 [job3] 00:10:26.996 filename=/dev/nvme0n4 00:10:26.996 Could not set queue depth (nvme0n1) 00:10:26.996 Could not set queue depth (nvme0n2) 00:10:26.996 Could not set queue depth (nvme0n3) 00:10:26.996 Could not set queue depth (nvme0n4) 00:10:27.257 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.257 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.257 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.257 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:27.257 fio-3.35 00:10:27.257 Starting 4 threads 00:10:29.809 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:30.071 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:30.071 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=2727936, buflen=4096 00:10:30.071 fio: pid=783902, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:30.332 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.332 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:30.332 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:10:30.332 fio: pid=783901, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:30.332 fio: io_u error on file /dev/nvme0n1: Operation 
not supported: read offset=11845632, buflen=4096 00:10:30.332 fio: pid=783898, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:30.332 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.332 13:14:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:30.592 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.593 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:30.593 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1863680, buflen=4096 00:10:30.593 fio: pid=783899, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:30.593 00:10:30.593 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=783898: Thu Dec 5 13:14:53 2024 00:10:30.593 read: IOPS=992, BW=3967KiB/s (4062kB/s)(11.3MiB/2916msec) 00:10:30.593 slat (usec): min=6, max=30194, avg=50.64, stdev=699.80 00:10:30.593 clat (usec): min=356, max=1617, avg=941.89, stdev=94.03 00:10:30.593 lat (usec): min=384, max=31142, avg=992.54, stdev=706.25 00:10:30.593 clat percentiles (usec): 00:10:30.593 | 1.00th=[ 611], 5.00th=[ 758], 10.00th=[ 824], 20.00th=[ 898], 00:10:30.593 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[ 963], 60.00th=[ 971], 00:10:30.593 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1020], 95.00th=[ 1045], 00:10:30.593 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1434], 99.95th=[ 1467], 00:10:30.593 | 99.99th=[ 1614] 00:10:30.593 bw ( KiB/s): min= 4032, max= 4248, per=78.57%, avg=4086.40, stdev=91.81, samples=5 00:10:30.593 iops : min= 1008, max= 1062, avg=1021.60, stdev=22.95, samples=5 00:10:30.593 lat (usec) : 500=0.17%, 750=4.25%, 1000=75.18% 00:10:30.593 lat (msec) : 2=20.36% 00:10:30.593 cpu : usr=2.26%, sys=3.57%, ctx=2898, majf=0, minf=1 00:10:30.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.593 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.593 issued rwts: total=2893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.593 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=783899: Thu Dec 5 13:14:53 2024 00:10:30.593 read: IOPS=145, BW=579KiB/s (593kB/s)(1820KiB/3142msec) 00:10:30.593 slat (usec): min=7, max=6622, avg=40.74, stdev=309.15 00:10:30.593 clat (usec): min=757, max=42093, avg=6799.94, stdev=14126.08 00:10:30.593 lat (usec): min=783, max=47994, avg=6840.72, stdev=14166.95 00:10:30.593 clat percentiles (usec): 00:10:30.593 | 1.00th=[ 775], 5.00th=[ 914], 10.00th=[ 971], 20.00th=[ 1020], 00:10:30.593 | 30.00th=[ 1074], 40.00th=[ 1106], 50.00th=[ 1123], 60.00th=[ 1156], 00:10:30.593 | 70.00th=[ 1172], 80.00th=[ 1205], 90.00th=[41157], 95.00th=[42206], 00:10:30.593 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:30.593 | 99.99th=[42206] 00:10:30.593 bw ( KiB/s): min= 99, max= 1496, per=11.33%, avg=589.83, stdev=534.47, samples=6 00:10:30.593 
iops : min= 24, max= 374, avg=147.33, stdev=133.76, samples=6 00:10:30.593 lat (usec) : 1000=14.69% 00:10:30.593 lat (msec) : 2=71.05%, 50=14.04% 00:10:30.593 cpu : usr=0.06%, sys=0.54%, ctx=459, majf=0, minf=2 00:10:30.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.593 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.593 issued rwts: total=456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.593 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=783901: Thu Dec 5 13:14:53 2024 00:10:30.593 read: IOPS=26, BW=105KiB/s (107kB/s)(288KiB/2745msec) 00:10:30.593 slat (usec): min=24, max=15670, avg=241.03, stdev=1830.96 00:10:30.593 clat (usec): min=504, max=42136, avg=37559.16, stdev=12155.63 00:10:30.593 lat (usec): min=533, max=56937, avg=37803.16, stdev=12359.97 00:10:30.593 clat percentiles (usec): 00:10:30.593 | 1.00th=[ 506], 5.00th=[ 848], 10.00th=[40633], 20.00th=[41157], 00:10:30.593 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:10:30.593 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:30.593 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:30.593 | 99.99th=[42206] 00:10:30.593 bw ( KiB/s): min= 96, max= 128, per=2.06%, avg=107.20, stdev=13.39, samples=5 00:10:30.593 iops : min= 24, max= 32, avg=26.80, stdev= 3.35, samples=5 00:10:30.593 lat (usec) : 750=2.74%, 1000=5.48% 00:10:30.593 lat (msec) : 2=1.37%, 50=89.04% 00:10:30.593 cpu : usr=0.00%, sys=0.11%, ctx=74, majf=0, minf=2 00:10:30.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.593 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.593 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.593 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=783902: Thu Dec 5 13:14:53 2024 00:10:30.593 read: IOPS=259, BW=1035KiB/s (1059kB/s)(2664KiB/2575msec) 00:10:30.593 slat (nsec): min=7603, max=62145, avg=25963.49, stdev=4144.74 00:10:30.593 clat (usec): min=803, max=45098, avg=3800.24, stdev=10064.49 00:10:30.593 lat (usec): min=830, max=45124, avg=3826.20, stdev=10064.45 00:10:30.593 clat percentiles (usec): 00:10:30.593 | 1.00th=[ 840], 5.00th=[ 955], 10.00th=[ 1020], 20.00th=[ 1074], 00:10:30.593 | 30.00th=[ 1106], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1172], 00:10:30.593 | 70.00th=[ 1188], 80.00th=[ 1205], 90.00th=[ 1237], 95.00th=[41157], 00:10:30.593 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:10:30.593 | 99.99th=[45351] 00:10:30.593 bw ( KiB/s): min= 416, max= 1760, per=20.27%, avg=1054.40, stdev=484.79, samples=5 00:10:30.593 iops : min= 104, max= 440, avg=263.60, stdev=121.20, samples=5 00:10:30.593 lat (usec) : 1000=8.40% 00:10:30.593 lat (msec) : 2=84.86%, 50=6.60% 00:10:30.593 cpu : usr=0.19%, sys=0.85%, ctx=667, majf=0, minf=2 00:10:30.593 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:30.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.593 complete : 0=0.1%, 4=99.9%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.593 issued rwts: total=667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.593 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:30.593 00:10:30.593 Run status group 0 (all jobs): 00:10:30.593 READ: bw=5201KiB/s (5325kB/s), 105KiB/s-3967KiB/s (107kB/s-4062kB/s), io=16.0MiB (16.7MB), run=2575-3142msec 00:10:30.593 00:10:30.593 Disk stats (read/write): 00:10:30.593 nvme0n1: ios=2787/0, merge=0/0, ticks=2557/0, in_queue=2557, util=90.78% 00:10:30.593 nvme0n2: ios=443/0, merge=0/0, ticks=2990/0, in_queue=2990, util=94.20% 00:10:30.593 nvme0n3: ios=67/0, merge=0/0, ticks=2498/0, in_queue=2498, util=95.54% 00:10:30.593 nvme0n4: ios=665/0, merge=0/0, ticks=2470/0, in_queue=2470, util=96.34% 00:10:30.853 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:30.853 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:31.115 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.115 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:31.115 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.115 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:31.376 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:31.376 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:31.636 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:31.636 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 783706 00:10:31.636 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:31.636 13:14:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:31.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.637 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:31.637 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:31.637 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:31.637 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.637 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:31.637 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.637 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:31.637 13:14:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:31.637 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:31.637 nvmf hotplug test: fio failed as expected 00:10:31.637 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:31.898 rmmod nvme_tcp 00:10:31.898 rmmod nvme_fabrics 00:10:31.898 rmmod nvme_keyring 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 780050 ']' 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 780050 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 780050 ']' 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 780050 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 780050 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 780050' 00:10:31.898 killing process with pid 780050 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 780050 00:10:31.898 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@978 -- # wait 780050 00:10:32.158 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:32.158 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:32.158 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:32.158 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:32.158 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:32.158 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:32.158 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:32.158 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:32.158 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:32.158 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.158 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.158 13:14:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:34.704 00:10:34.704 real 0m30.274s 00:10:34.704 user 2m37.349s 00:10:34.704 sys 0m9.954s 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.704 ************************************ 00:10:34.704 END TEST nvmf_fio_target 00:10:34.704 ************************************ 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:34.704 ************************************ 00:10:34.704 START TEST nvmf_bdevio 00:10:34.704 ************************************ 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:34.704 * Looking for test storage... 
00:10:34.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:34.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.704 --rc genhtml_branch_coverage=1 00:10:34.704 --rc genhtml_function_coverage=1 00:10:34.704 --rc genhtml_legend=1 00:10:34.704 --rc geninfo_all_blocks=1 00:10:34.704 --rc geninfo_unexecuted_blocks=1 00:10:34.704 00:10:34.704 ' 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:34.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.704 --rc genhtml_branch_coverage=1 00:10:34.704 --rc genhtml_function_coverage=1 00:10:34.704 --rc genhtml_legend=1 00:10:34.704 --rc geninfo_all_blocks=1 00:10:34.704 --rc geninfo_unexecuted_blocks=1 00:10:34.704 00:10:34.704 ' 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:34.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.704 --rc genhtml_branch_coverage=1 00:10:34.704 --rc genhtml_function_coverage=1 00:10:34.704 --rc genhtml_legend=1 00:10:34.704 --rc geninfo_all_blocks=1 00:10:34.704 --rc geninfo_unexecuted_blocks=1 00:10:34.704 00:10:34.704 ' 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:34.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:34.704 --rc genhtml_branch_coverage=1 00:10:34.704 --rc genhtml_function_coverage=1 00:10:34.704 --rc genhtml_legend=1 00:10:34.704 --rc geninfo_all_blocks=1 00:10:34.704 --rc geninfo_unexecuted_blocks=1 00:10:34.704 00:10:34.704 ' 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:34.704 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:34.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:34.705 13:14:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:42.849 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:42.849 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:42.849 13:15:05 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.849 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:42.850 Found net devices under 0000:31:00.0: cvl_0_0 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:42.850 Found net devices under 0000:31:00.1: cvl_0_1 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:42.850 
13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:42.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:42.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:10:42.850 00:10:42.850 --- 10.0.0.2 ping statistics --- 00:10:42.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.850 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:42.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:42.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:10:42.850 00:10:42.850 --- 10.0.0.1 ping statistics --- 00:10:42.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.850 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:42.850 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:43.111 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:43.111 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:43.111 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:43.111 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.111 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=789726 00:10:43.111 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 789726 00:10:43.111 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:43.111 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 789726 ']' 00:10:43.111 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.111 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.111 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.111 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.111 13:15:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.111 [2024-12-05 13:15:05.525115] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
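
The two ping probes above confirm the namespace topology that nvmftestinit builds on the physical ports: cvl_0_0 carries the target address 10.0.0.2 inside the cvl_0_0_ns_spdk namespace, while cvl_0_1 keeps the initiator address 10.0.0.1 in the root namespace. With connectivity verified, nvmfappstart launches the target inside that namespace; the core mask 0x78 pins its reactors to cores 3 through 6, matching the four reactor start notices that follow. A minimal sketch of the launch sequence, assuming helper semantics that match the traced commands:

  # start nvmf_tgt in the target namespace and wait for its RPC socket (sketch)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock until the app answers
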
00:10:43.111 [2024-12-05 13:15:05.525204] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.111 [2024-12-05 13:15:05.633737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.372 [2024-12-05 13:15:05.684826] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.372 [2024-12-05 13:15:05.684881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.372 [2024-12-05 13:15:05.684891] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.372 [2024-12-05 13:15:05.684898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.372 [2024-12-05 13:15:05.684904] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.372 [2024-12-05 13:15:05.687230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:43.372 [2024-12-05 13:15:05.687393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:43.372 [2024-12-05 13:15:05.687526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.372 [2024-12-05 13:15:05.687526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.944 [2024-12-05 13:15:06.396027] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.944 Malloc0 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.944 13:15:06 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:43.944 [2024-12-05 13:15:06.477175] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:43.944 { 00:10:43.944 "params": { 00:10:43.944 "name": "Nvme$subsystem", 00:10:43.944 "trtype": "$TEST_TRANSPORT", 00:10:43.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:43.944 "adrfam": "ipv4", 00:10:43.944 "trsvcid": "$NVMF_PORT", 00:10:43.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:43.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:43.944 "hdgst": ${hdgst:-false}, 00:10:43.944 "ddgst": ${ddgst:-false} 00:10:43.944 }, 00:10:43.944 "method": "bdev_nvme_attach_controller" 00:10:43.944 } 00:10:43.944 EOF 00:10:43.944 )") 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:43.944 13:15:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:43.944 "params": { 00:10:43.944 "name": "Nvme1", 00:10:43.944 "trtype": "tcp", 00:10:43.944 "traddr": "10.0.0.2", 00:10:43.944 "adrfam": "ipv4", 00:10:43.944 "trsvcid": "4420", 00:10:43.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:43.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:43.944 "hdgst": false, 00:10:43.944 "ddgst": false 00:10:43.944 }, 00:10:43.944 "method": "bdev_nvme_attach_controller" 00:10:43.944 }' 00:10:44.205 [2024-12-05 13:15:06.536830] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
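
The JSON document printed above is how the bdevio initiator learns about the target: gen_nvmf_target_json expands its heredoc template into a single bdev_nvme_attach_controller entry for Nvme1 at 10.0.0.2:4420, and the result reaches bdevio on file descriptor 62 through process substitution. In effect (a sketch, not the harness's literal invocation):

  # feed the generated attach-controller config to bdevio over an anonymous fd (sketch)
  ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)
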
00:10:44.205 [2024-12-05 13:15:06.536923] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid790076 ] 00:10:44.205 [2024-12-05 13:15:06.622973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:44.205 [2024-12-05 13:15:06.667333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.205 [2024-12-05 13:15:06.667450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.205 [2024-12-05 13:15:06.667453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.465 I/O targets: 00:10:44.465 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:44.465 00:10:44.465 00:10:44.465 CUnit - A unit testing framework for C - Version 2.1-3 00:10:44.465 http://cunit.sourceforge.net/ 00:10:44.465 00:10:44.465 00:10:44.465 Suite: bdevio tests on: Nvme1n1 00:10:44.465 Test: blockdev write read block ...passed 00:10:44.725 Test: blockdev write zeroes read block ...passed 00:10:44.725 Test: blockdev write zeroes read no split ...passed 00:10:44.725 Test: blockdev write zeroes read split ...passed 00:10:44.725 Test: blockdev write zeroes read split partial ...passed 00:10:44.725 Test: blockdev reset ...[2024-12-05 13:15:07.108065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:44.725 [2024-12-05 13:15:07.108124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b74b0 (9): Bad file descriptor 00:10:44.725 [2024-12-05 13:15:07.177132] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
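
The "Bad file descriptor" flush failure logged here is expected noise: the blockdev reset test disconnects the controller mid-flight, so flushing the dying TCP qpair fails before the controller is reconnected and the reset reported successful. The same kind of reset can be driven by hand against an SPDK app that has the controller attached (a sketch; bdevio triggers the reset internally, and the controller name must match the attached one):

  # request a controller-level reset over the app's RPC socket (sketch)
  ./scripts/rpc.py bdev_nvme_reset_controller Nvme1
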
00:10:44.725 passed 00:10:44.725 Test: blockdev write read 8 blocks ...passed 00:10:44.725 Test: blockdev write read size > 128k ...passed 00:10:44.725 Test: blockdev write read invalid size ...passed 00:10:44.985 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:44.985 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:44.985 Test: blockdev write read max offset ...passed 00:10:44.985 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:44.985 Test: blockdev writev readv 8 blocks ...passed 00:10:44.985 Test: blockdev writev readv 30 x 1block ...passed 00:10:44.985 Test: blockdev writev readv block ...passed 00:10:44.985 Test: blockdev writev readv size > 128k ...passed 00:10:44.985 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:44.985 Test: blockdev comparev and writev ...[2024-12-05 13:15:07.438585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:44.985 [2024-12-05 13:15:07.438612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:44.985 [2024-12-05 13:15:07.438623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:44.985 [2024-12-05 13:15:07.438630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:44.985 [2024-12-05 13:15:07.438898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:44.985 [2024-12-05 13:15:07.438907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:44.985 [2024-12-05 13:15:07.438917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:44.985 [2024-12-05 13:15:07.438922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:44.985 [2024-12-05 13:15:07.439135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:44.985 [2024-12-05 13:15:07.439143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:44.985 [2024-12-05 13:15:07.439153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:44.985 [2024-12-05 13:15:07.439158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:44.985 [2024-12-05 13:15:07.439383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:44.985 [2024-12-05 13:15:07.439391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:44.985 [2024-12-05 13:15:07.439400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:44.985 [2024-12-05 13:15:07.439406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:44.985 passed 00:10:44.985 Test: blockdev nvme passthru rw ...passed 00:10:44.985 Test: blockdev nvme passthru vendor specific ...[2024-12-05 13:15:07.523307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:44.985 [2024-12-05 13:15:07.523317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:44.985 [2024-12-05 13:15:07.523508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:44.985 [2024-12-05 13:15:07.523515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:44.985 [2024-12-05 13:15:07.523603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:44.985 [2024-12-05 13:15:07.523609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:44.986 [2024-12-05 13:15:07.523710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:44.986 [2024-12-05 13:15:07.523717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:44.986 passed 00:10:44.986 Test: blockdev nvme admin passthru ...passed 00:10:45.246 Test: blockdev copy ...passed 00:10:45.246 00:10:45.246 Run Summary: Type Total Ran Passed Failed Inactive 00:10:45.246 suites 1 1 n/a 0 0 00:10:45.246 tests 23 23 23 0 0 00:10:45.246 asserts 152 152 152 0 n/a 00:10:45.246 00:10:45.246 Elapsed time = 1.216 seconds 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:45.246 rmmod nvme_tcp 00:10:45.246 rmmod nvme_fabrics 00:10:45.246 rmmod nvme_keyring 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
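
Module teardown in nvmftestfini uses a bounded retry: with errexit suspended, it attempts to unload nvme-tcp for up to 20 iterations (module references can linger briefly after the last disconnect), then removes nvme-fabrics and restores set -e. Roughly, as a sketch of the traced pattern rather than the verbatim common.sh loop:

  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # retry while kernel references drain
  done
  modprobe -v -r nvme-fabrics
  set -e
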
00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 789726 ']' 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 789726 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 789726 ']' 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 789726 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.246 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 789726 00:10:45.507 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:45.507 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:45.507 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 789726' 00:10:45.507 killing process with pid 789726 00:10:45.507 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 789726 00:10:45.507 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 789726 00:10:45.507 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:45.507 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:45.507 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:45.507 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:45.507 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:45.507 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:45.507 13:15:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:45.507 13:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:45.507 13:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:45.507 13:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.507 13:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.507 13:15:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:48.084 00:10:48.084 real 0m13.352s 00:10:48.084 user 0m14.151s 00:10:48.084 sys 0m6.977s 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.084 ************************************ 00:10:48.084 END TEST nvmf_bdevio 00:10:48.084 ************************************ 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:48.084 00:10:48.084 real 5m17.080s 00:10:48.084 user 11m57.389s 00:10:48.084 sys 1m59.321s 00:10:48.084 
13:15:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:48.084 ************************************ 00:10:48.084 END TEST nvmf_target_core 00:10:48.084 ************************************ 00:10:48.084 13:15:10 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:48.084 13:15:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:48.084 13:15:10 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.084 13:15:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:48.084 ************************************ 00:10:48.084 START TEST nvmf_target_extra 00:10:48.084 ************************************ 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:48.084 * Looking for test storage... 00:10:48.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:48.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.084 --rc genhtml_branch_coverage=1 00:10:48.084 --rc genhtml_function_coverage=1 00:10:48.084 --rc genhtml_legend=1 00:10:48.084 --rc geninfo_all_blocks=1 00:10:48.084 --rc geninfo_unexecuted_blocks=1 00:10:48.084 00:10:48.084 ' 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:48.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.084 --rc genhtml_branch_coverage=1 00:10:48.084 --rc genhtml_function_coverage=1 00:10:48.084 --rc genhtml_legend=1 00:10:48.084 --rc geninfo_all_blocks=1 00:10:48.084 --rc geninfo_unexecuted_blocks=1 00:10:48.084 00:10:48.084 ' 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:48.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.084 --rc genhtml_branch_coverage=1 00:10:48.084 --rc genhtml_function_coverage=1 00:10:48.084 --rc genhtml_legend=1 00:10:48.084 --rc geninfo_all_blocks=1 00:10:48.084 --rc geninfo_unexecuted_blocks=1 00:10:48.084 00:10:48.084 ' 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:48.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.084 --rc genhtml_branch_coverage=1 00:10:48.084 --rc genhtml_function_coverage=1 00:10:48.084 --rc genhtml_legend=1 00:10:48.084 --rc geninfo_all_blocks=1 00:10:48.084 --rc geninfo_unexecuted_blocks=1 00:10:48.084 00:10:48.084 ' 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
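
The "common.sh: line 33: [: : integer expression expected" stderr, already seen under nvmf_bdevio, recurs each time a suite re-sources common.sh: build_nvmf_app_args runs a numeric test on a variable that is unset in this job, and the empty expansion is not a valid integer. The test falls through harmlessly, but a guarded expansion would silence it (a sketch; SPDK_TEST_FLAG is a stand-in, since the real variable name is not visible in this trace):

  # default the unset flag to 0 before the numeric comparison (sketch)
  if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
      : # branch taken only when the flag is explicitly 1
  fi
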
00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.084 13:15:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:48.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:48.085 ************************************ 00:10:48.085 START TEST nvmf_example 00:10:48.085 ************************************ 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:48.085 * Looking for test storage... 
00:10:48.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:48.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.085 --rc genhtml_branch_coverage=1 00:10:48.085 --rc genhtml_function_coverage=1 00:10:48.085 --rc genhtml_legend=1 00:10:48.085 --rc geninfo_all_blocks=1 00:10:48.085 --rc geninfo_unexecuted_blocks=1 00:10:48.085 00:10:48.085 ' 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:48.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.085 --rc genhtml_branch_coverage=1 00:10:48.085 --rc genhtml_function_coverage=1 00:10:48.085 --rc genhtml_legend=1 00:10:48.085 --rc geninfo_all_blocks=1 00:10:48.085 --rc geninfo_unexecuted_blocks=1 00:10:48.085 00:10:48.085 ' 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:48.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.085 --rc genhtml_branch_coverage=1 00:10:48.085 --rc genhtml_function_coverage=1 00:10:48.085 --rc genhtml_legend=1 00:10:48.085 --rc geninfo_all_blocks=1 00:10:48.085 --rc geninfo_unexecuted_blocks=1 00:10:48.085 00:10:48.085 ' 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:48.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.085 --rc genhtml_branch_coverage=1 00:10:48.085 --rc genhtml_function_coverage=1 00:10:48.085 --rc genhtml_legend=1 00:10:48.085 --rc geninfo_all_blocks=1 00:10:48.085 --rc geninfo_unexecuted_blocks=1 00:10:48.085 00:10:48.085 ' 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:48.085 13:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.085 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:48.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:48.346 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:48.346 13:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:48.347 13:15:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:56.500 13:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:56.500 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:56.500 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:56.500 Found net devices under 0000:31:00.0: cvl_0_0 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:56.500 Found net devices under 0000:31:00.1: cvl_0_1 00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:56.500 13:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:56.500 13:15:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:56.500 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:56.500 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:56.500 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:56.500 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:56.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:56.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms
00:10:56.500
00:10:56.500 --- 10.0.0.2 ping statistics ---
00:10:56.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:56.500 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms
00:10:56.500 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:56.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:56.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms
00:10:56.501
00:10:56.501 --- 10.0.0.1 ping statistics ---
00:10:56.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:56.501 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms
00:10:56.501 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:56.501 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:10:56.501 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:56.501 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:56.501 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:56.501 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:56.501 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:56.501 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:56.501 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:56.762 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:10:56.762 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:10:56.762 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:56.762 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:56.762 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:10:56.762 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:10:56.762 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=795607
00:10:56.762 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:10:56.762 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:10:56.762 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 795607
00:10:56.762 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 795607 ']'
00:10:56.762 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:56.762 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:56.762 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example
-- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:56.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:56.762 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:56.762 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:57.836 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:57.836 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0
00:10:57.836 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:10:57.836 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:57.836 13:15:19 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:57.836 13:15:20
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:57.836 13:15:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:07.906 Initializing NVMe Controllers 00:11:07.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:07.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:07.906 Initialization complete. Launching workers. 00:11:07.906 ======================================================== 00:11:07.906 Latency(us) 00:11:07.906 Device Information : IOPS MiB/s Average min max 00:11:07.906 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17641.47 68.91 3627.40 709.95 16400.34 00:11:07.906 ======================================================== 00:11:07.906 Total : 17641.47 68.91 3627.40 709.95 16400.34 00:11:07.906 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:07.906 rmmod nvme_tcp 00:11:07.906 rmmod nvme_fabrics 00:11:07.906 rmmod nvme_keyring 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 795607 ']' 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 795607 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 795607 ']' 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 795607 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:07.906 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 795607 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # 
process_name=nvmf 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 795607' 00:11:08.168 killing process with pid 795607 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 795607 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 795607 00:11:08.168 nvmf threads initialize successfully 00:11:08.168 bdev subsystem init successfully 00:11:08.168 created a nvmf target service 00:11:08.168 create targets's poll groups done 00:11:08.168 all subsystems of target started 00:11:08.168 nvmf target is running 00:11:08.168 all subsystems of target stopped 00:11:08.168 destroy targets's poll groups done 00:11:08.168 destroyed the nvmf target service 00:11:08.168 bdev subsystem finish successfully 00:11:08.168 nvmf threads destroy successfully 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.168 13:15:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:10.720 00:11:10.720 real 0m22.278s 00:11:10.720 user 0m46.843s 00:11:10.720 sys 0m7.607s 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:10.720 ************************************ 00:11:10.720 END TEST nvmf_example 00:11:10.720 ************************************ 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:10.720 13:15:32 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:10.720 ************************************ 00:11:10.720 START TEST nvmf_filesystem 00:11:10.720 ************************************ 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:10.720 * Looking for test storage... 00:11:10.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.720 13:15:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:10.720 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:10.720 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.720 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:10.720 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.720 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.720 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.720 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:10.720 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.720 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:10.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.720 --rc genhtml_branch_coverage=1 00:11:10.720 --rc genhtml_function_coverage=1 00:11:10.720 --rc genhtml_legend=1 00:11:10.720 --rc geninfo_all_blocks=1 00:11:10.720 --rc geninfo_unexecuted_blocks=1 00:11:10.720 00:11:10.720 ' 00:11:10.720 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:10.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.720 --rc genhtml_branch_coverage=1 00:11:10.720 --rc genhtml_function_coverage=1 00:11:10.720 --rc genhtml_legend=1 00:11:10.720 --rc geninfo_all_blocks=1 00:11:10.720 --rc geninfo_unexecuted_blocks=1 00:11:10.720 00:11:10.720 ' 00:11:10.720 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:10.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.720 --rc genhtml_branch_coverage=1 00:11:10.720 --rc genhtml_function_coverage=1 00:11:10.721 --rc genhtml_legend=1 00:11:10.721 --rc geninfo_all_blocks=1 00:11:10.721 --rc geninfo_unexecuted_blocks=1 00:11:10.721 00:11:10.721 ' 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:10.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.721 --rc genhtml_branch_coverage=1 00:11:10.721 --rc genhtml_function_coverage=1 00:11:10.721 --rc genhtml_legend=1 00:11:10.721 --rc geninfo_all_blocks=1 00:11:10.721 --rc geninfo_unexecuted_blocks=1 00:11:10.721 00:11:10.721 ' 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:10.721 13:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:10.721 
13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:10.721 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:10.722 #define SPDK_CONFIG_H 00:11:10.722 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:10.722 #define SPDK_CONFIG_APPS 1 00:11:10.722 #define SPDK_CONFIG_ARCH native 00:11:10.722 #undef SPDK_CONFIG_ASAN 00:11:10.722 #undef SPDK_CONFIG_AVAHI 00:11:10.722 #undef SPDK_CONFIG_CET 00:11:10.722 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:10.722 #define SPDK_CONFIG_COVERAGE 1 00:11:10.722 #define SPDK_CONFIG_CROSS_PREFIX 00:11:10.722 #undef SPDK_CONFIG_CRYPTO 00:11:10.722 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:10.722 #undef SPDK_CONFIG_CUSTOMOCF 00:11:10.722 #undef SPDK_CONFIG_DAOS 00:11:10.722 #define SPDK_CONFIG_DAOS_DIR 00:11:10.722 #define SPDK_CONFIG_DEBUG 1 00:11:10.722 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:10.722 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:10.722 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:10.722 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:10.722 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:10.722 #undef SPDK_CONFIG_DPDK_UADK 00:11:10.722 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:10.722 #define SPDK_CONFIG_EXAMPLES 1 00:11:10.722 #undef SPDK_CONFIG_FC 00:11:10.722 #define SPDK_CONFIG_FC_PATH 00:11:10.722 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:10.722 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:10.722 #define SPDK_CONFIG_FSDEV 1 00:11:10.722 #undef SPDK_CONFIG_FUSE 00:11:10.722 #undef SPDK_CONFIG_FUZZER 00:11:10.722 #define SPDK_CONFIG_FUZZER_LIB 00:11:10.722 #undef SPDK_CONFIG_GOLANG 00:11:10.722 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:10.722 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:10.722 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:10.722 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:10.722 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:10.722 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:10.722 #undef SPDK_CONFIG_HAVE_LZ4 00:11:10.722 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:10.722 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:10.722 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:10.722 #define SPDK_CONFIG_IDXD 1 00:11:10.722 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:10.722 #undef SPDK_CONFIG_IPSEC_MB 00:11:10.722 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:10.722 #define SPDK_CONFIG_ISAL 1 00:11:10.722 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:10.722 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:10.722 #define SPDK_CONFIG_LIBDIR 00:11:10.722 #undef SPDK_CONFIG_LTO 00:11:10.722 #define SPDK_CONFIG_MAX_LCORES 128 00:11:10.722 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:10.722 #define SPDK_CONFIG_NVME_CUSE 1 00:11:10.722 #undef SPDK_CONFIG_OCF 00:11:10.722 #define SPDK_CONFIG_OCF_PATH 00:11:10.722 #define SPDK_CONFIG_OPENSSL_PATH 00:11:10.722 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:10.722 #define SPDK_CONFIG_PGO_DIR 00:11:10.722 #undef SPDK_CONFIG_PGO_USE 00:11:10.722 #define SPDK_CONFIG_PREFIX /usr/local 00:11:10.722 #undef SPDK_CONFIG_RAID5F 00:11:10.722 #undef SPDK_CONFIG_RBD 00:11:10.722 #define SPDK_CONFIG_RDMA 1 00:11:10.722 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:10.722 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:10.722 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:10.722 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:10.722 #define SPDK_CONFIG_SHARED 1 00:11:10.722 #undef SPDK_CONFIG_SMA 00:11:10.722 #define SPDK_CONFIG_TESTS 1 00:11:10.722 #undef SPDK_CONFIG_TSAN 
00:11:10.722 #define SPDK_CONFIG_UBLK 1 00:11:10.722 #define SPDK_CONFIG_UBSAN 1 00:11:10.722 #undef SPDK_CONFIG_UNIT_TESTS 00:11:10.722 #undef SPDK_CONFIG_URING 00:11:10.722 #define SPDK_CONFIG_URING_PATH 00:11:10.722 #undef SPDK_CONFIG_URING_ZNS 00:11:10.722 #undef SPDK_CONFIG_USDT 00:11:10.722 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:10.722 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:10.722 #define SPDK_CONFIG_VFIO_USER 1 00:11:10.722 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:10.722 #define SPDK_CONFIG_VHOST 1 00:11:10.722 #define SPDK_CONFIG_VIRTIO 1 00:11:10.722 #undef SPDK_CONFIG_VTUNE 00:11:10.722 #define SPDK_CONFIG_VTUNE_DIR 00:11:10.722 #define SPDK_CONFIG_WERROR 1 00:11:10.722 #define SPDK_CONFIG_WPDK_DIR 00:11:10.722 #undef SPDK_CONFIG_XNVME 00:11:10.722 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:10.722 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:10.722 13:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
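The pm/common trace above builds the resource-monitor list conditionally: CPU-temperature and BMC power monitors are added only on bare-metal Linux, not under QEMU or in a container. A hedged reconstruction of that logic follows; the identifier compared against QEMU is printed as dots in this log, so $platform below is a placeholder, not the real variable name:

MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
if [[ $(uname -s) == Linux ]]; then
    # $platform stands in for whatever platform string the real script
    # compares against QEMU (elided as dots in the trace above).
    if [[ $platform != QEMU ]] && [[ ! -e /.dockerenv ]]; then
        MONITOR_RESOURCES+=(collect-cpu-temp)
        MONITOR_RESOURCES+=(collect-bmc-pm)
    fi
fi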
00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:10.723 13:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:10.723 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
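The long run of ": 0" / "export SPDK_TEST_*" pairs traced above is bash's default-assignment idiom: ":" is a no-op whose argument forces the ${VAR:=default} expansion, and xtrace prints the expanded value, so pre-set flags show ": 1" (as SPDK_TEST_NVMF does here) while untouched ones show ": 0". A hedged reconstruction of one such pair (the exact spelling in autotest_common.sh may differ):

: "${SPDK_TEST_NVMF:=0}"   # assigns 0 only if unset/empty; xtrace prints ": 1" here
export SPDK_TEST_NVMF      # because the nvmf-tcp job pre-sets this flag to 1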
00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
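The suppression-file steps traced above silence a known libfuse3 leak for LeakSanitizer. Reconstructed from the trace (the overall shape is assumed; the individual commands and paths are as logged):

asan_suppression_file=/var/tmp/asan_suppression_file
rm -rf "$asan_suppression_file"
echo "leak:libfuse3.so" >> "$asan_suppression_file"   # known fuse3 leak, per the log
export LSAN_OPTIONS=suppressions=$asan_suppression_file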
00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:10.724 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 798406 ]] 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 798406 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:11:10.725 
13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.EcLKBA 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.EcLKBA/tests/target /tmp/spdk.EcLKBA 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:10.725 13:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122211819520 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356550144 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7144730624 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64666906624 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678273024 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847689216 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871310848 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23621632 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=175104 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=328704 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:10.725 13:15:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677564416 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678277120 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=712704 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:10.725 * Looking for test storage... 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122211819520 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:10.725 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9359323136 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
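set_test_storage, traced above, parses df -T into per-mount size/avail arrays and settles on the first candidate directory whose filesystem has room for the request (here / with ~122 GB free against a 2 GiB ask). A simplified, hypothetical sketch of the same idea; pick_test_storage is not the real function, and the original additionally handles tmpfs/ramfs mounts and overlay-root resizing:

pick_test_storage() {
    # Usage: pick_test_storage <bytes> <candidate-dir>...
    local requested=$1 dir avail; shift
    for dir in "$@"; do
        mkdir -p "$dir" 2>/dev/null || continue
        # Free bytes on the filesystem holding $dir (GNU df).
        avail=$(df -B1 --output=avail "$dir" | tail -1)
        (( avail >= requested )) && { echo "$dir"; return 0; }
    done
    return 1
}
pick_test_storage $((2 * 1024 ** 3)) /my/testdir /tmp/spdk-fallback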
0 : 0 - 1]' 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:10.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.726 --rc genhtml_branch_coverage=1 00:11:10.726 --rc genhtml_function_coverage=1 00:11:10.726 --rc genhtml_legend=1 00:11:10.726 --rc geninfo_all_blocks=1 00:11:10.726 --rc geninfo_unexecuted_blocks=1 00:11:10.726 00:11:10.726 ' 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:10.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.726 --rc genhtml_branch_coverage=1 00:11:10.726 --rc genhtml_function_coverage=1 00:11:10.726 --rc genhtml_legend=1 00:11:10.726 --rc geninfo_all_blocks=1 00:11:10.726 --rc geninfo_unexecuted_blocks=1 00:11:10.726 00:11:10.726 ' 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:10.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.726 --rc genhtml_branch_coverage=1 00:11:10.726 --rc genhtml_function_coverage=1 00:11:10.726 --rc genhtml_legend=1 00:11:10.726 --rc geninfo_all_blocks=1 00:11:10.726 --rc geninfo_unexecuted_blocks=1 00:11:10.726 00:11:10.726 ' 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:10.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.726 --rc genhtml_branch_coverage=1 00:11:10.726 --rc genhtml_function_coverage=1 00:11:10.726 --rc genhtml_legend=1 00:11:10.726 --rc geninfo_all_blocks=1 00:11:10.726 --rc geninfo_unexecuted_blocks=1 00:11:10.726 00:11:10.726 ' 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
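The lt/cmp_versions trace above checks whether the installed lcov is older than 2 (it is 1.15 here), which selects the old-style --rc lcov_* option spelling for LCOV_OPTS. A hedged sketch of the same component-wise comparison; numeric components are assumed, and scripts/common.sh handles more operators than this:

version_lt() {
    # Split dotted versions on "." and "-" and compare component-wise,
    # treating missing components as 0.
    local IFS=.-
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # versions are equal
}
version_lt 1.15 2 && echo "lcov < 2: use old-style --rc options"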
-- nvmf/common.sh@7 -- # uname -s 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.726 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:10.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:10.989 13:15:33 
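The "[: : integer expression expected" message above is a real, if benign, script error: line 33 of test/nvmf/common.sh applies -eq to a variable that is empty in this run, and "[" refuses to treat the empty string as an integer. The variable's name cannot be read from the trace, so FLAG below is a placeholder; the usual guard is a default expansion:

[ "$FLAG" -eq 1 ]        # FLAG empty => "[: : integer expression expected"
[ "${FLAG:-0}" -eq 1 ]   # guarded: an empty/unset FLAG is treated as 0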
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:10.989 13:15:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.154 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:19.155 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:19.155 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:19.155 13:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:19.155 Found net devices under 0000:31:00.0: cvl_0_0 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:19.155 Found net devices under 0000:31:00.1: cvl_0_1 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:19.155 13:15:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.155 13:15:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:19.155 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:19.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:19.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:11:19.155 00:11:19.156 --- 10.0.0.2 ping statistics --- 00:11:19.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.156 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:19.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:11:19.156 00:11:19.156 --- 10.0.0.1 ping statistics --- 00:11:19.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.156 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:19.156 ************************************ 00:11:19.156 START TEST nvmf_filesystem_no_in_capsule 00:11:19.156 ************************************ 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=802717 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 802717 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 802717 ']' 00:11:19.156 13:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:19.156 13:15:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.156 [2024-12-05 13:15:41.469298] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:11:19.156 [2024-12-05 13:15:41.469360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.156 [2024-12-05 13:15:41.563440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.156 [2024-12-05 13:15:41.604662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.156 [2024-12-05 13:15:41.604701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.156 [2024-12-05 13:15:41.604709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.156 [2024-12-05 13:15:41.604716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.156 [2024-12-05 13:15:41.604722] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
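By this point the trace has moved the target-side port into its own network namespace, verified reachability in both directions with ping, and launched nvmf_tgt inside that namespace via `ip netns exec`. A minimal sketch of the equivalent manual plumbing, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses shown above:

    # network plumbing as traced in nvmf/common.sh (nvmf_tcp_init)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                   # initiator -> target, as in the log
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Keeping the target's port in a separate namespace is what lets one machine act as both NVMe/TCP target (10.0.0.2) and initiator (10.0.0.1) over a real physical back-to-back link.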
00:11:19.156 [2024-12-05 13:15:41.606360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.156 [2024-12-05 13:15:41.606477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.156 [2024-12-05 13:15:41.606633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.156 [2024-12-05 13:15:41.606633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.729 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:19.729 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:19.729 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:19.729 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:19.729 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.990 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.990 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:19.990 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:19.990 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.990 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.990 [2024-12-05 13:15:42.326724] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.990 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.990 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:19.990 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.990 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.990 Malloc1 00:11:19.990 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.991 13:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.991 [2024-12-05 13:15:42.464967] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:19.991 { 00:11:19.991 "name": "Malloc1", 00:11:19.991 "aliases": [ 00:11:19.991 "2144caf4-759d-48ea-8d52-c782259b36ce" 00:11:19.991 ], 00:11:19.991 "product_name": "Malloc disk", 00:11:19.991 "block_size": 512, 00:11:19.991 "num_blocks": 1048576, 00:11:19.991 "uuid": "2144caf4-759d-48ea-8d52-c782259b36ce", 00:11:19.991 "assigned_rate_limits": { 00:11:19.991 "rw_ios_per_sec": 0, 00:11:19.991 "rw_mbytes_per_sec": 0, 00:11:19.991 "r_mbytes_per_sec": 0, 00:11:19.991 "w_mbytes_per_sec": 0 00:11:19.991 }, 00:11:19.991 "claimed": true, 00:11:19.991 "claim_type": "exclusive_write", 00:11:19.991 "zoned": false, 00:11:19.991 "supported_io_types": { 00:11:19.991 "read": 
true, 00:11:19.991 "write": true, 00:11:19.991 "unmap": true, 00:11:19.991 "flush": true, 00:11:19.991 "reset": true, 00:11:19.991 "nvme_admin": false, 00:11:19.991 "nvme_io": false, 00:11:19.991 "nvme_io_md": false, 00:11:19.991 "write_zeroes": true, 00:11:19.991 "zcopy": true, 00:11:19.991 "get_zone_info": false, 00:11:19.991 "zone_management": false, 00:11:19.991 "zone_append": false, 00:11:19.991 "compare": false, 00:11:19.991 "compare_and_write": false, 00:11:19.991 "abort": true, 00:11:19.991 "seek_hole": false, 00:11:19.991 "seek_data": false, 00:11:19.991 "copy": true, 00:11:19.991 "nvme_iov_md": false 00:11:19.991 }, 00:11:19.991 "memory_domains": [ 00:11:19.991 { 00:11:19.991 "dma_device_id": "system", 00:11:19.991 "dma_device_type": 1 00:11:19.991 }, 00:11:19.991 { 00:11:19.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.991 "dma_device_type": 2 00:11:19.991 } 00:11:19.991 ], 00:11:19.991 "driver_specific": {} 00:11:19.991 } 00:11:19.991 ]' 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:19.991 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:20.251 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:20.251 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:20.251 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:20.251 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:20.251 13:15:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:21.636 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:21.636 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:21.636 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.636 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:21.636 13:15:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:24.178 13:15:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:24.748 13:15:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:25.691 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:25.691 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:25.691 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:25.691 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:25.691 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:25.691 ************************************ 00:11:25.691 START TEST filesystem_ext4 00:11:25.691 ************************************ 00:11:25.691 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
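Before this subtest started, the target was provisioned over RPC (transport with in-capsule data disabled, a 512 MiB malloc bdev, subsystem cnode1, TCP listener), the host connected with nvme-cli, and the namespace was partitioned with parted. Each filesystem_* subtest that follows then runs the same host-side cycle. A condensed sketch of both halves as traced here, with the rpc.py invocation path assumed (rpc_cmd is the harness wrapper) and device names as discovered above:

    # target-side provisioning, as traced for the no-in-capsule pass
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # -c 0: no in-capsule data
    rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # host-side cycle repeated by each filesystem_* subtest
    mkfs.ext4 -F /dev/nvme0n1p1      # later passes use mkfs.btrfs -f / mkfs.xfs -f instead
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 802717                   # the target process must still be alive afterwards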
00:11:25.691 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:25.691 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:25.691 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:25.691 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:25.691 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:25.691 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:25.691 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:25.691 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:25.691 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:25.691 13:15:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:25.691 mke2fs 1.47.0 (5-Feb-2023) 00:11:25.951 Discarding device blocks: 0/522240 done 00:11:25.951 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:25.951 Filesystem UUID: ee5fd4ea-6ee0-499d-a5c5-9886dbfba045 00:11:25.951 Superblock backups stored on blocks: 00:11:25.951 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:25.951 00:11:25.951 Allocating group tables: 0/64 done 00:11:25.951 Writing inode tables: 0/64 done 00:11:28.495 Creating journal (8192 blocks): done 00:11:28.495 Writing superblocks and filesystem accounting information: 0/64 done 00:11:28.495 00:11:28.495 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:28.495 13:15:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:35.086 
13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 802717 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:35.086 00:11:35.086 real 0m8.471s 00:11:35.086 user 0m0.036s 00:11:35.086 sys 0m0.073s 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:35.086 ************************************ 00:11:35.086 END TEST filesystem_ext4 00:11:35.086 ************************************ 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.086 ************************************ 00:11:35.086 START TEST filesystem_btrfs 00:11:35.086 ************************************ 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:35.086 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:35.087 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:35.087 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:35.087 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:35.087 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:35.087 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:35.087 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:35.087 13:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:35.087 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:35.087 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:35.087 btrfs-progs v6.8.1 00:11:35.087 See https://btrfs.readthedocs.io for more information. 00:11:35.087 00:11:35.087 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:35.087 NOTE: several default settings have changed in version 5.15, please make sure 00:11:35.087 this does not affect your deployments: 00:11:35.087 - DUP for metadata (-m dup) 00:11:35.087 - enabled no-holes (-O no-holes) 00:11:35.087 - enabled free-space-tree (-R free-space-tree) 00:11:35.087 00:11:35.087 Label: (null) 00:11:35.087 UUID: ba4a931e-e428-49e2-bfff-b7b6b74bdbed 00:11:35.087 Node size: 16384 00:11:35.087 Sector size: 4096 (CPU page size: 4096) 00:11:35.087 Filesystem size: 510.00MiB 00:11:35.087 Block group profiles: 00:11:35.087 Data: single 8.00MiB 00:11:35.087 Metadata: DUP 32.00MiB 00:11:35.087 System: DUP 8.00MiB 00:11:35.087 SSD detected: yes 00:11:35.087 Zoned device: no 00:11:35.087 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:35.087 Checksum: crc32c 00:11:35.087 Number of devices: 1 00:11:35.087 Devices: 00:11:35.087 ID SIZE PATH 00:11:35.087 1 510.00MiB /dev/nvme0n1p1 00:11:35.087 00:11:35.087 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:35.087 13:15:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:35.087 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:35.087 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:35.087 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:35.087 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:35.087 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:35.087 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:35.087 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 802717 00:11:35.087 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:35.087 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:35.087 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:35.087 
13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:35.087 00:11:35.087 real 0m0.875s 00:11:35.087 user 0m0.034s 00:11:35.087 sys 0m0.119s 00:11:35.087 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.087 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:35.087 ************************************ 00:11:35.087 END TEST filesystem_btrfs 00:11:35.087 ************************************ 00:11:35.348 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:35.348 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:35.348 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.348 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.348 ************************************ 00:11:35.348 START TEST filesystem_xfs 00:11:35.348 ************************************ 00:11:35.348 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:35.348 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:35.348 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:35.348 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:35.348 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:35.348 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:35.348 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:35.348 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:35.348 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:35.348 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:35.348 13:15:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:35.348 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:35.348 = sectsz=512 attr=2, projid32bit=1 00:11:35.348 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:35.348 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:35.348 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:35.348 = sunit=0 swidth=0 blks 00:11:35.348 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:35.348 log =internal log bsize=4096 blocks=16384, version=2 00:11:35.348 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:35.348 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:36.290 Discarding blocks...Done. 00:11:36.290 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:36.290 13:15:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:38.836 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:38.836 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:38.836 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:38.836 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:38.836 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:38.836 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:38.836 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 802717 00:11:38.836 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:38.836 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:38.836 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:38.836 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:38.836 00:11:38.836 real 0m3.350s 00:11:38.836 user 0m0.026s 00:11:38.836 sys 0m0.081s 00:11:38.836 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.836 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:38.836 ************************************ 00:11:38.836 END TEST filesystem_xfs 00:11:38.836 ************************************ 00:11:38.836 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:38.836 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:39.097 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.358 13:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 802717 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 802717 ']' 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 802717 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 802717 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 802717' 00:11:39.358 killing process with pid 802717 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 802717 00:11:39.358 13:16:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 802717 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:39.619 00:11:39.619 real 0m20.598s 00:11:39.619 user 1m21.455s 00:11:39.619 sys 0m1.469s 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.619 ************************************ 00:11:39.619 END TEST nvmf_filesystem_no_in_capsule 00:11:39.619 ************************************ 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:39.619 ************************************ 00:11:39.619 START TEST nvmf_filesystem_in_capsule 00:11:39.619 ************************************ 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=806976 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 806976 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 806976 ']' 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
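The second half of the test repeats the entire sequence with in-capsule data enabled; the only functional difference is the transport option traced just below. A sketch of the distinction, sizes as used in this run:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this pass: up to 4 KiB in-capsule data
    # With -c 0, every host write needs a separate R2T/H2C data PDU exchange after the
    # command capsule; with -c 4096, write payloads of up to 4096 bytes travel inside
    # the command capsule itself, sparing small I/O that extra PDU round trip.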
00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:39.619 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.619 [2024-12-05 13:16:02.143400] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:11:39.619 [2024-12-05 13:16:02.143446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.880 [2024-12-05 13:16:02.227985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.880 [2024-12-05 13:16:02.263827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.880 [2024-12-05 13:16:02.263859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.880 [2024-12-05 13:16:02.263871] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.880 [2024-12-05 13:16:02.263878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.880 [2024-12-05 13:16:02.263884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:39.880 [2024-12-05 13:16:02.265425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.880 [2024-12-05 13:16:02.265542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.880 [2024-12-05 13:16:02.265695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:39.880 [2024-12-05 13:16:02.265696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.451 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.451 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:40.451 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:40.451 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:40.451 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.451 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.451 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:40.451 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:40.451 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.451 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.451 [2024-12-05 13:16:02.992505] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.451 13:16:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.451 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:40.451 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.451 13:16:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.712 Malloc1 00:11:40.712 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.713 [2024-12-05 13:16:03.129837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:40.713 13:16:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:40.713 { 00:11:40.713 "name": "Malloc1", 00:11:40.713 "aliases": [ 00:11:40.713 "2fc7fcff-de8a-471a-b6af-a2972eabe000" 00:11:40.713 ], 00:11:40.713 "product_name": "Malloc disk", 00:11:40.713 "block_size": 512, 00:11:40.713 "num_blocks": 1048576, 00:11:40.713 "uuid": "2fc7fcff-de8a-471a-b6af-a2972eabe000", 00:11:40.713 "assigned_rate_limits": { 00:11:40.713 "rw_ios_per_sec": 0, 00:11:40.713 "rw_mbytes_per_sec": 0, 00:11:40.713 "r_mbytes_per_sec": 0, 00:11:40.713 "w_mbytes_per_sec": 0 00:11:40.713 }, 00:11:40.713 "claimed": true, 00:11:40.713 "claim_type": "exclusive_write", 00:11:40.713 "zoned": false, 00:11:40.713 "supported_io_types": { 00:11:40.713 "read": true, 00:11:40.713 "write": true, 00:11:40.713 "unmap": true, 00:11:40.713 "flush": true, 00:11:40.713 "reset": true, 00:11:40.713 "nvme_admin": false, 00:11:40.713 "nvme_io": false, 00:11:40.713 "nvme_io_md": false, 00:11:40.713 "write_zeroes": true, 00:11:40.713 "zcopy": true, 00:11:40.713 "get_zone_info": false, 00:11:40.713 "zone_management": false, 00:11:40.713 "zone_append": false, 00:11:40.713 "compare": false, 00:11:40.713 "compare_and_write": false, 00:11:40.713 "abort": true, 00:11:40.713 "seek_hole": false, 00:11:40.713 "seek_data": false, 00:11:40.713 "copy": true, 00:11:40.713 "nvme_iov_md": false 00:11:40.713 }, 00:11:40.713 "memory_domains": [ 00:11:40.713 { 00:11:40.713 "dma_device_id": "system", 00:11:40.713 "dma_device_type": 1 00:11:40.713 }, 00:11:40.713 { 00:11:40.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.713 "dma_device_type": 2 00:11:40.713 } 00:11:40.713 ], 00:11:40.713 "driver_specific": {} 00:11:40.713 } 00:11:40.713 ]' 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:40.713 13:16:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.635 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:42.635 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:42.635 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.635 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:42.635 13:16:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:44.551 13:16:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:44.551 13:16:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:45.489 13:16:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:46.433 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:46.433 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:46.433 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:46.433 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.433 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.433 ************************************ 00:11:46.433 START TEST filesystem_in_capsule_ext4 00:11:46.433 ************************************ 00:11:46.433 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:46.433 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:46.433 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:46.433 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:46.433 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:46.433 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:46.433 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:46.433 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:46.433 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:46.433 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:46.433 13:16:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:46.433 mke2fs 1.47.0 (5-Feb-2023) 00:11:46.433 Discarding device blocks: 0/522240 done 00:11:46.433 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:46.433 Filesystem UUID: 3dc664fa-e158-450b-9a22-e5f740134e6f 00:11:46.434 Superblock backups stored on blocks: 00:11:46.434 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:46.434 00:11:46.434 Allocating group tables: 0/64 done 00:11:46.434 Writing inode tables: 
0/64 done 00:11:46.694 Creating journal (8192 blocks): done 00:11:48.652 Writing superblocks and filesystem accounting information: 0/64 done 00:11:48.652 00:11:48.652 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:48.652 13:16:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 806976 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:55.240 00:11:55.240 real 0m8.444s 00:11:55.240 user 0m0.024s 00:11:55.240 sys 0m0.085s 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:55.240 ************************************ 00:11:55.240 END TEST filesystem_in_capsule_ext4 00:11:55.240 ************************************ 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.240 
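The ext4 leg that just finished, together with the provisioning traced earlier, condenses to the sketch below (commands lifted from the trace; rpc_cmd in the harness forwards to SPDK's scripts/rpc.py, which talks to /var/tmp/spdk.sock by default). The btrfs and xfs legs that follow repeat the same mount/touch/sync/rm/umount cycle, differing only in the mkfs force flag (-F for ext4, -f otherwise):

  rpc=scripts/rpc.py   # assumed invocation; the harness wraps this as rpc_cmd
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096    # -c 4096: the in-capsule size under test
  $rpc bdev_malloc_create 512 512 -b Malloc1              # 512 MiB bdev, 512 B blocks (536870912 B)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
      -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% && partprobe
  mkfs.ext4 -F /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"   # the target process must still be alive after each leg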
************************************ 00:11:55.240 START TEST filesystem_in_capsule_btrfs 00:11:55.240 ************************************ 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:55.240 btrfs-progs v6.8.1 00:11:55.240 See https://btrfs.readthedocs.io for more information. 00:11:55.240 00:11:55.240 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:55.240 NOTE: several default settings have changed in version 5.15, please make sure 00:11:55.240 this does not affect your deployments: 00:11:55.240 - DUP for metadata (-m dup) 00:11:55.240 - enabled no-holes (-O no-holes) 00:11:55.240 - enabled free-space-tree (-R free-space-tree) 00:11:55.240 00:11:55.240 Label: (null) 00:11:55.240 UUID: 63913bc3-33b3-4726-af0a-798edd995edf 00:11:55.240 Node size: 16384 00:11:55.240 Sector size: 4096 (CPU page size: 4096) 00:11:55.240 Filesystem size: 510.00MiB 00:11:55.240 Block group profiles: 00:11:55.240 Data: single 8.00MiB 00:11:55.240 Metadata: DUP 32.00MiB 00:11:55.240 System: DUP 8.00MiB 00:11:55.240 SSD detected: yes 00:11:55.240 Zoned device: no 00:11:55.240 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:55.240 Checksum: crc32c 00:11:55.240 Number of devices: 1 00:11:55.240 Devices: 00:11:55.240 ID SIZE PATH 00:11:55.240 1 510.00MiB /dev/nvme0n1p1 00:11:55.240 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:55.240 13:16:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 806976 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:56.183 00:11:56.183 real 0m1.238s 00:11:56.183 user 0m0.027s 00:11:56.183 sys 0m0.124s 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:56.183 ************************************ 00:11:56.183 END TEST filesystem_in_capsule_btrfs 00:11:56.183 ************************************ 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.183 ************************************ 00:11:56.183 START TEST filesystem_in_capsule_xfs 00:11:56.183 ************************************ 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:56.183 13:16:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:56.183 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:56.184 = sectsz=512 attr=2, projid32bit=1 00:11:56.184 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:56.184 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:56.184 data = bsize=4096 blocks=130560, imaxpct=25 00:11:56.184 = sunit=0 swidth=0 blks 00:11:56.184 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:56.184 log =internal log bsize=4096 blocks=16384, version=2 00:11:56.184 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:56.184 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:57.570 Discarding blocks...Done. 
00:11:57.571 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:57.571 13:16:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 806976 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:00.114 00:12:00.114 real 0m3.662s 00:12:00.114 user 0m0.028s 00:12:00.114 sys 0m0.081s 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:00.114 ************************************ 00:12:00.114 END TEST filesystem_in_capsule_xfs 00:12:00.114 ************************************ 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 806976 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 806976 ']' 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 806976 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.114 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 806976 00:12:00.115 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.115 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.115 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 806976' 00:12:00.115 killing process with pid 806976 00:12:00.115 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 806976 00:12:00.115 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 806976 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:00.374 00:12:00.374 real 0m20.721s 00:12:00.374 user 1m21.913s 00:12:00.374 sys 0m1.544s 00:12:00.374 13:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.374 ************************************ 00:12:00.374 END TEST nvmf_filesystem_in_capsule 00:12:00.374 ************************************ 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.374 rmmod nvme_tcp 00:12:00.374 rmmod nvme_fabrics 00:12:00.374 rmmod nvme_keyring 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.374 13:16:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.919 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:02.919 00:12:02.919 real 0m52.196s 00:12:02.919 user 2m45.776s 00:12:02.919 sys 0m9.396s 00:12:02.919 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:02.919 13:16:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:02.919 
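Teardown, as traced above, amounts to the following sketch (the pid is the one captured at startup in this run; killprocess/wait behavior is approximated):

  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 806976 && wait 806976        # killprocess on nvmfpid from this run
  modprobe -v -r nvme-tcp           # rmmods nvme_tcp, nvme_fabrics, nvme_keyring per the trace
  modprobe -v -r nvme-fabrics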
************************************ 00:12:02.919 END TEST nvmf_filesystem 00:12:02.919 ************************************ 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:02.919 ************************************ 00:12:02.919 START TEST nvmf_target_discovery 00:12:02.919 ************************************ 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:02.919 * Looking for test storage... 00:12:02.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:02.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.919 --rc genhtml_branch_coverage=1 00:12:02.919 --rc genhtml_function_coverage=1 00:12:02.919 --rc genhtml_legend=1 00:12:02.919 --rc geninfo_all_blocks=1 00:12:02.919 --rc geninfo_unexecuted_blocks=1 00:12:02.919 00:12:02.919 ' 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:02.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.919 --rc genhtml_branch_coverage=1 00:12:02.919 --rc genhtml_function_coverage=1 00:12:02.919 --rc genhtml_legend=1 00:12:02.919 --rc geninfo_all_blocks=1 00:12:02.919 --rc geninfo_unexecuted_blocks=1 00:12:02.919 00:12:02.919 ' 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:02.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.919 --rc genhtml_branch_coverage=1 00:12:02.919 --rc genhtml_function_coverage=1 00:12:02.919 --rc genhtml_legend=1 00:12:02.919 --rc geninfo_all_blocks=1 00:12:02.919 --rc geninfo_unexecuted_blocks=1 00:12:02.919 00:12:02.919 ' 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:02.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.919 --rc genhtml_branch_coverage=1 00:12:02.919 --rc genhtml_function_coverage=1 00:12:02.919 --rc genhtml_legend=1 00:12:02.919 --rc geninfo_all_blocks=1 00:12:02.919 --rc geninfo_unexecuted_blocks=1 00:12:02.919 00:12:02.919 ' 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:02.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:02.919 13:16:25 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:11.059 13:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:11.059 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:11.059 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:11.059 Found net devices under 0000:31:00.0: cvl_0_0 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
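Both E810 ports (0x8086:0x159b, driver ice) are resolved to kernel interfaces purely through sysfs, as the pci_net_devs expansions above show. A condensed sketch of that mapping loop, with the paths taken verbatim from the trace:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done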
00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:11.059 Found net devices under 0000:31:00.1: cvl_0_1 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.059 13:16:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.059 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.059 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:11.059 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.059 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.059 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:11.059 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:11.059 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.059 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.060 13:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:11.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.573 ms 00:12:11.060 00:12:11.060 --- 10.0.0.2 ping statistics --- 00:12:11.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.060 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:12:11.060 00:12:11.060 --- 10.0.0.1 ping statistics --- 00:12:11.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.060 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=815897 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 815897 00:12:11.060 13:16:33 
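nvmf_tcp_init above builds the whole test fabric from the two back-to-back NIC ports: cvl_0_0 is moved into its own namespace to act as the target, while cvl_0_1 stays in the host namespace as the initiator. A sketch of that topology using the exact commands from the trace (only the iptables comment string is abbreviated):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target side, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the SPDK_NVMF comment tags the rule for later cleanup
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:...'
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator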
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 815897 ']' 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.060 13:16:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:11.060 [2024-12-05 13:16:33.445718] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:12:11.060 [2024-12-05 13:16:33.445784] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.060 [2024-12-05 13:16:33.540014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:11.060 [2024-12-05 13:16:33.581297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.060 [2024-12-05 13:16:33.581338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.060 [2024-12-05 13:16:33.581346] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.060 [2024-12-05 13:16:33.581354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.060 [2024-12-05 13:16:33.581359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
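nvmfappstart launches nvmf_tgt inside the target namespace and blocks until the RPC socket answers; the DPDK/EAL banner above is that process coming up on four cores (-m 0xF). A minimal sketch of the launch sequence, assuming $rootdir points at the SPDK checkout:

    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    "${NVMF_TARGET_NS_CMD[@]}" "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # polls /var/tmp/spdk.sock until it accepts RPCs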
00:12:11.060 [2024-12-05 13:16:33.583247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:11.060 [2024-12-05 13:16:33.583367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.060 [2024-12-05 13:16:33.583529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.060 [2024-12-05 13:16:33.583530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.007 [2024-12-05 13:16:34.303576] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.007 Null1 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.007 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 13:16:34 
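With the four reactors running, every rpc_cmd call in the trace goes over the UNIX socket to the target. A simplified stand-in for the helper (an assumption of this sketch; the real rpc_cmd in autotest_common.sh keeps a persistent rpc.py session for speed) and the transport call being traced:

    rpc_cmd() { "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }   # simplified
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192   # -u 8192 sets the IO unit size; -o is a TCP-specific tuning flag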
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 [2024-12-05 13:16:34.363949] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 Null2 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:12.008 Null3 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 Null4 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 
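The Null1..Null4 traces above, and the cnode4 listener call continuing just below, are four iterations of one loop in discovery.sh, reconstructed here from the traced rpc_cmd calls (-a allows any host, -s sets the serial, per the pattern SPDK00000000000001..4):

    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create Null$i $NULL_BDEV_SIZE $NULL_BLOCK_SIZE   # 102400 / 512 from the constants above
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT
    done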
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.008 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:12:12.270 00:12:12.270 Discovery Log Number of Records 6, Generation counter 6 00:12:12.270 =====Discovery Log Entry 0====== 00:12:12.270 trtype: tcp 00:12:12.270 adrfam: ipv4 00:12:12.270 subtype: current discovery subsystem 00:12:12.270 treq: not required 00:12:12.270 portid: 0 00:12:12.270 trsvcid: 4420 00:12:12.270 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:12.270 traddr: 10.0.0.2 00:12:12.270 eflags: explicit discovery connections, duplicate discovery information 00:12:12.270 sectype: none 00:12:12.270 =====Discovery Log Entry 1====== 00:12:12.270 trtype: tcp 00:12:12.270 adrfam: ipv4 00:12:12.270 subtype: nvme subsystem 00:12:12.270 treq: not required 00:12:12.270 portid: 0 00:12:12.270 trsvcid: 4420 00:12:12.270 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:12.270 traddr: 10.0.0.2 00:12:12.270 eflags: none 00:12:12.270 sectype: none 00:12:12.270 =====Discovery Log Entry 2====== 00:12:12.270 trtype: tcp 00:12:12.270 adrfam: ipv4 00:12:12.270 subtype: nvme subsystem 00:12:12.270 treq: not required 00:12:12.270 portid: 0 00:12:12.270 trsvcid: 4420 00:12:12.270 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:12.270 traddr: 10.0.0.2 00:12:12.270 eflags: none 00:12:12.270 sectype: none 00:12:12.270 =====Discovery Log Entry 3====== 00:12:12.270 trtype: tcp 00:12:12.270 adrfam: ipv4 00:12:12.270 subtype: nvme subsystem 00:12:12.270 treq: not required 00:12:12.270 portid: 0 00:12:12.270 trsvcid: 4420 00:12:12.270 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:12.270 traddr: 10.0.0.2 00:12:12.270 eflags: none 00:12:12.270 sectype: none 00:12:12.270 =====Discovery Log Entry 4====== 00:12:12.270 trtype: tcp 00:12:12.270 adrfam: ipv4 00:12:12.270 subtype: nvme subsystem 
00:12:12.270 treq: not required 00:12:12.270 portid: 0 00:12:12.270 trsvcid: 4420 00:12:12.270 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:12.270 traddr: 10.0.0.2 00:12:12.270 eflags: none 00:12:12.270 sectype: none 00:12:12.270 =====Discovery Log Entry 5====== 00:12:12.270 trtype: tcp 00:12:12.270 adrfam: ipv4 00:12:12.270 subtype: discovery subsystem referral 00:12:12.270 treq: not required 00:12:12.270 portid: 0 00:12:12.270 trsvcid: 4430 00:12:12.270 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:12.270 traddr: 10.0.0.2 00:12:12.270 eflags: none 00:12:12.270 sectype: none 00:12:12.270 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:12.270 Perform nvmf subsystem discovery via RPC 00:12:12.270 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:12.270 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.270 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.270 [ 00:12:12.270 { 00:12:12.270 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:12.270 "subtype": "Discovery", 00:12:12.270 "listen_addresses": [ 00:12:12.270 { 00:12:12.270 "trtype": "TCP", 00:12:12.270 "adrfam": "IPv4", 00:12:12.270 "traddr": "10.0.0.2", 00:12:12.270 "trsvcid": "4420" 00:12:12.270 } 00:12:12.270 ], 00:12:12.270 "allow_any_host": true, 00:12:12.270 "hosts": [] 00:12:12.270 }, 00:12:12.270 { 00:12:12.270 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:12.270 "subtype": "NVMe", 00:12:12.270 "listen_addresses": [ 00:12:12.270 { 00:12:12.270 "trtype": "TCP", 00:12:12.270 "adrfam": "IPv4", 00:12:12.270 "traddr": "10.0.0.2", 00:12:12.270 "trsvcid": "4420" 00:12:12.270 } 00:12:12.270 ], 00:12:12.270 "allow_any_host": true, 00:12:12.270 "hosts": [], 00:12:12.270 "serial_number": "SPDK00000000000001", 00:12:12.270 "model_number": "SPDK bdev Controller", 00:12:12.270 "max_namespaces": 32, 00:12:12.270 "min_cntlid": 1, 00:12:12.270 "max_cntlid": 65519, 00:12:12.271 "namespaces": [ 00:12:12.271 { 00:12:12.271 "nsid": 1, 00:12:12.271 "bdev_name": "Null1", 00:12:12.271 "name": "Null1", 00:12:12.271 "nguid": "A49D4058C6B041D58B6F668CF6A5F5AF", 00:12:12.271 "uuid": "a49d4058-c6b0-41d5-8b6f-668cf6a5f5af" 00:12:12.271 } 00:12:12.271 ] 00:12:12.271 }, 00:12:12.271 { 00:12:12.271 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:12.271 "subtype": "NVMe", 00:12:12.271 "listen_addresses": [ 00:12:12.271 { 00:12:12.271 "trtype": "TCP", 00:12:12.271 "adrfam": "IPv4", 00:12:12.271 "traddr": "10.0.0.2", 00:12:12.271 "trsvcid": "4420" 00:12:12.271 } 00:12:12.271 ], 00:12:12.271 "allow_any_host": true, 00:12:12.271 "hosts": [], 00:12:12.271 "serial_number": "SPDK00000000000002", 00:12:12.271 "model_number": "SPDK bdev Controller", 00:12:12.271 "max_namespaces": 32, 00:12:12.271 "min_cntlid": 1, 00:12:12.271 "max_cntlid": 65519, 00:12:12.271 "namespaces": [ 00:12:12.271 { 00:12:12.271 "nsid": 1, 00:12:12.271 "bdev_name": "Null2", 00:12:12.271 "name": "Null2", 00:12:12.271 "nguid": "F1305A205E0E49219F21DE07ADFCCB81", 00:12:12.271 "uuid": "f1305a20-5e0e-4921-9f21-de07adfccb81" 00:12:12.271 } 00:12:12.271 ] 00:12:12.271 }, 00:12:12.271 { 00:12:12.271 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:12.271 "subtype": "NVMe", 00:12:12.271 "listen_addresses": [ 00:12:12.271 { 00:12:12.271 "trtype": "TCP", 00:12:12.271 "adrfam": "IPv4", 00:12:12.271 "traddr": "10.0.0.2", 
00:12:12.271 "trsvcid": "4420" 00:12:12.271 } 00:12:12.271 ], 00:12:12.271 "allow_any_host": true, 00:12:12.271 "hosts": [], 00:12:12.271 "serial_number": "SPDK00000000000003", 00:12:12.271 "model_number": "SPDK bdev Controller", 00:12:12.271 "max_namespaces": 32, 00:12:12.271 "min_cntlid": 1, 00:12:12.271 "max_cntlid": 65519, 00:12:12.271 "namespaces": [ 00:12:12.271 { 00:12:12.271 "nsid": 1, 00:12:12.271 "bdev_name": "Null3", 00:12:12.271 "name": "Null3", 00:12:12.271 "nguid": "FF20FC9CE1E24C48A1641D21F970200D", 00:12:12.271 "uuid": "ff20fc9c-e1e2-4c48-a164-1d21f970200d" 00:12:12.271 } 00:12:12.271 ] 00:12:12.271 }, 00:12:12.271 { 00:12:12.271 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:12.271 "subtype": "NVMe", 00:12:12.271 "listen_addresses": [ 00:12:12.271 { 00:12:12.271 "trtype": "TCP", 00:12:12.271 "adrfam": "IPv4", 00:12:12.271 "traddr": "10.0.0.2", 00:12:12.271 "trsvcid": "4420" 00:12:12.271 } 00:12:12.271 ], 00:12:12.271 "allow_any_host": true, 00:12:12.271 "hosts": [], 00:12:12.271 "serial_number": "SPDK00000000000004", 00:12:12.271 "model_number": "SPDK bdev Controller", 00:12:12.271 "max_namespaces": 32, 00:12:12.271 "min_cntlid": 1, 00:12:12.271 "max_cntlid": 65519, 00:12:12.271 "namespaces": [ 00:12:12.271 { 00:12:12.271 "nsid": 1, 00:12:12.271 "bdev_name": "Null4", 00:12:12.271 "name": "Null4", 00:12:12.271 "nguid": "C9C1DF75698B4F08AEF3DAF228E4A087", 00:12:12.271 "uuid": "c9c1df75-698b-4f08-aef3-daf228e4a087" 00:12:12.271 } 00:12:12.271 ] 00:12:12.271 } 00:12:12.271 ] 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.271 13:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.271 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:12.533 13:16:34 
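Teardown mirrors setup: the delete calls above run in the same 1..4 loop, then the referral is removed and bdev_get_bdevs must come back empty. Reconstructed from the trace (the failure message on the last line is illustrative, not from the script):

    for i in $(seq 1 4); do
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        rpc_cmd bdev_null_delete Null$i
    done
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT_REFERRAL
    check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')
    [ -n "$check_bdevs" ] && echo "bdevs left behind: $check_bdevs"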
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:12.533 rmmod nvme_tcp 00:12:12.533 rmmod nvme_fabrics 00:12:12.533 rmmod nvme_keyring 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 815897 ']' 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 815897 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 815897 ']' 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 815897 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:12.533 13:16:34 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:12.533 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 815897 00:12:12.533 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:12.533 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:12.533 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 815897' 00:12:12.533 killing process with pid 815897 00:12:12.533 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 815897 00:12:12.533 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 815897 00:12:12.793 13:16:35 
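nvmftestfini above unloads the kernel initiator modules (the rmmod lines) and kills the target by pid; the firewall and namespace cleanup follows just below. The iptr step works because every rule the test added carries the SPDK_NVMF comment, so one save/filter/restore pass drops them all. A sketch reconstructed from the three traced commands (the comment on _remove_spdk_ns is a presumption, not confirmed by the trace):

    iptables-save | grep -v SPDK_NVMF | iptables-restore
    _remove_spdk_ns                  # presumably deletes cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1         # return the initiator port to a clean state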
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:12.793 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:12.793 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:12.793 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:12.793 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:12.793 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:12.793 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:12.793 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:12.793 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:12.793 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.793 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.793 13:16:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.704 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.704 00:12:14.704 real 0m12.189s 00:12:14.704 user 0m8.740s 00:12:14.704 sys 0m6.625s 00:12:14.704 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.704 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.704 ************************************ 00:12:14.704 END TEST nvmf_target_discovery 00:12:14.704 ************************************ 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.964 ************************************ 00:12:14.964 START TEST nvmf_referrals 00:12:14.964 ************************************ 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:14.964 * Looking for test storage... 
00:12:14.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:14.964 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:15.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.225 --rc genhtml_branch_coverage=1 00:12:15.225 --rc genhtml_function_coverage=1 00:12:15.225 --rc genhtml_legend=1 00:12:15.225 --rc geninfo_all_blocks=1 00:12:15.225 --rc geninfo_unexecuted_blocks=1 00:12:15.225 00:12:15.225 ' 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:15.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.225 --rc genhtml_branch_coverage=1 00:12:15.225 --rc genhtml_function_coverage=1 00:12:15.225 --rc genhtml_legend=1 00:12:15.225 --rc geninfo_all_blocks=1 00:12:15.225 --rc geninfo_unexecuted_blocks=1 00:12:15.225 00:12:15.225 ' 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:15.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.225 --rc genhtml_branch_coverage=1 00:12:15.225 --rc genhtml_function_coverage=1 00:12:15.225 --rc genhtml_legend=1 00:12:15.225 --rc geninfo_all_blocks=1 00:12:15.225 --rc geninfo_unexecuted_blocks=1 00:12:15.225 00:12:15.225 ' 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:15.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:15.225 --rc genhtml_branch_coverage=1 00:12:15.225 --rc genhtml_function_coverage=1 00:12:15.225 --rc genhtml_legend=1 00:12:15.225 --rc geninfo_all_blocks=1 00:12:15.225 --rc geninfo_unexecuted_blocks=1 00:12:15.225 00:12:15.225 ' 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
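The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates 2.0: each version string is split on '.', '-' and ':' and compared numerically field by field. Condensed:

    IFS=.-: read -ra ver1 <<< "1.15"   # ver1=(1 15)
    IFS=.-: read -ra ver2 <<< "2"      # ver2=(2)
    (( ver1[0] < ver2[0] )) && echo "1.15 < 2"   # true on the first field, so lt returns 0

Because the comparison is true, the --rc lcov_branch_coverage / lcov_function_coverage options seen above are exported into LCOV_OPTS.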
# uname -s 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.225 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:15.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:15.226 13:16:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:23.372 13:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:23.372 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:23.372 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:23.372 
13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:23.372 Found net devices under 0000:31:00.0: cvl_0_0 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:23.372 Found net devices under 0000:31:00.1: cvl_0_1 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:23.372 13:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:23.372 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:23.372 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.732 ms 00:12:23.372 00:12:23.372 --- 10.0.0.2 ping statistics --- 00:12:23.372 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.372 rtt min/avg/max/mdev = 0.732/0.732/0.732/0.000 ms 00:12:23.372 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.372 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:23.373 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:12:23.373 00:12:23.373 --- 10.0.0.1 ping statistics --- 00:12:23.373 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.373 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=820976 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 820976 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 820976 ']' 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.373 13:16:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.634 [2024-12-05 13:16:45.946785] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:12:23.634 [2024-12-05 13:16:45.946857] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.634 [2024-12-05 13:16:46.039507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.634 [2024-12-05 13:16:46.081731] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.634 [2024-12-05 13:16:46.081767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.634 [2024-12-05 13:16:46.081775] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.634 [2024-12-05 13:16:46.081782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.634 [2024-12-05 13:16:46.081788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.634 [2024-12-05 13:16:46.083392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.634 [2024-12-05 13:16:46.083517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.634 [2024-12-05 13:16:46.083676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.634 [2024-12-05 13:16:46.083676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:24.205 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:24.205 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:24.205 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:24.205 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:24.205 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.465 [2024-12-05 13:16:46.803511] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:24.465 [2024-12-05 13:16:46.832023] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.465 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.466 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:24.466 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:24.466 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:24.466 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:24.466 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:24.466 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:24.466 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:24.466 13:16:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:24.726 13:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:24.726 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:24.986 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:24.987 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:24.987 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.247 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:25.247 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:25.247 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:25.247 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:25.247 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:25.247 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.247 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:25.507 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:25.507 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:25.507 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:25.507 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:25.507 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.507 13:16:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.768 13:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.768 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:26.029 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:26.029 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:26.029 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:26.029 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:26.029 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.029 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:26.289 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:26.290 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:26.290 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.290 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.290 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.290 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:26.290 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:26.290 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.290 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:26.290 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.290 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:26.290 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:26.290 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:26.290 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:26.290 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:26.290 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:26.290 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:26.551 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:26.551 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:26.551 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:26.551 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:26.551 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:26.551 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:26.551 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
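Annotation: the referral assertions that just completed reduce to the flow below (a sketch; every command appears in the trace above, the loop is a condensation of the three add calls, and NVME_HOST carries the --hostnqn/--hostid pair generated in common.sh):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do   # NVMF_REFERRAL_IP_1..3
        rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    # Two views must agree. First, the target's own referral table...
    rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # ...second, what an initiator sees from the discovery service on port 8009.
    nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # Referrals added with -n <subnqn> advertise that subsystem in the discovery
    # log page; teardown is symmetric via nvmf_discovery_remove_referral.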
00:12:26.551 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:26.551 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:26.551 13:16:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:26.551 rmmod nvme_tcp 00:12:26.551 rmmod nvme_fabrics 00:12:26.551 rmmod nvme_keyring 00:12:26.551 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:26.551 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:26.551 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:26.551 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 820976 ']' 00:12:26.551 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 820976 00:12:26.551 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 820976 ']' 00:12:26.551 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 820976 00:12:26.551 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:26.551 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:26.551 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 820976 00:12:26.551 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:26.551 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:26.551 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 820976' 00:12:26.551 killing process with pid 820976 00:12:26.812 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 820976 00:12:26.812 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 820976 00:12:26.812 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:26.812 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:26.812 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:26.812 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:26.812 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:26.813 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:26.813 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:26.813 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:26.813 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:26.813 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.813 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.813 13:16:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.359 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:29.359 00:12:29.359 real 0m13.973s 00:12:29.359 user 0m15.882s 00:12:29.359 sys 0m7.130s 00:12:29.359 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.359 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.359 ************************************ 00:12:29.359 END TEST nvmf_referrals 00:12:29.359 ************************************ 00:12:29.359 13:16:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:29.359 13:16:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:29.359 13:16:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.359 13:16:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:29.359 ************************************ 00:12:29.359 START TEST nvmf_connect_disconnect 00:12:29.359 ************************************ 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:29.360 * Looking for test storage... 00:12:29.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:29.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.360 --rc genhtml_branch_coverage=1 00:12:29.360 --rc genhtml_function_coverage=1 00:12:29.360 --rc genhtml_legend=1 00:12:29.360 --rc geninfo_all_blocks=1 00:12:29.360 --rc geninfo_unexecuted_blocks=1 00:12:29.360 00:12:29.360 ' 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:29.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.360 --rc genhtml_branch_coverage=1 00:12:29.360 --rc genhtml_function_coverage=1 00:12:29.360 --rc genhtml_legend=1 00:12:29.360 --rc geninfo_all_blocks=1 00:12:29.360 --rc geninfo_unexecuted_blocks=1 00:12:29.360 00:12:29.360 ' 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:29.360 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.360 --rc genhtml_branch_coverage=1 00:12:29.360 --rc genhtml_function_coverage=1 00:12:29.360 --rc genhtml_legend=1 00:12:29.360 --rc geninfo_all_blocks=1 00:12:29.360 --rc geninfo_unexecuted_blocks=1 00:12:29.360 00:12:29.360 ' 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:29.360 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.360 --rc genhtml_branch_coverage=1 00:12:29.360 --rc genhtml_function_coverage=1 00:12:29.360 --rc genhtml_legend=1 00:12:29.360 --rc geninfo_all_blocks=1 00:12:29.360 --rc geninfo_unexecuted_blocks=1 00:12:29.360 00:12:29.360 ' 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.360 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.361 13:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:29.361 13:16:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:37.512 
13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:37.512 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:37.512 
13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:37.512 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:37.512 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:37.513 Found net devices under 0000:31:00.0: cvl_0_0 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
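The "[: : integer expression expected" error from nvmf/common.sh line 33, shown earlier in this run, comes from the traced test '[' '' -eq 1 ']': the variable under test expands to an empty string, so `[` has no integer to compare and the check falls through as false. A minimal reproduction with one defensive pattern, using a hypothetical variable name since the real one at common.sh:33 is not visible in the trace:

    flag=""                      # hypothetical stand-in for the unset option
    [ "$flag" -eq 1 ]            # bash: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ]       # defaulting the operand keeps the test numeric

The run continues past the error because the failing test is only an if condition that evaluates false; the @37 and @39 checks that follow it still execute.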
00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:37.513 Found net devices under 0000:31:00.1: cvl_0_1 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:37.513 13:16:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:37.513 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:37.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.711 ms 00:12:37.775 00:12:37.775 --- 10.0.0.2 ping statistics --- 00:12:37.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.775 rtt min/avg/max/mdev = 0.711/0.711/0.711/0.000 ms 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:37.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:12:37.775 00:12:37.775 --- 10.0.0.1 ping statistics --- 00:12:37.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.775 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=826434 00:12:37.775 13:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 826434 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 826434 ']' 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:37.775 13:17:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:37.775 [2024-12-05 13:17:00.255366] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:12:37.775 [2024-12-05 13:17:00.255438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.038 [2024-12-05 13:17:00.346922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.038 [2024-12-05 13:17:00.388423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.038 [2024-12-05 13:17:00.388461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.038 [2024-12-05 13:17:00.388469] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.038 [2024-12-05 13:17:00.388476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.038 [2024-12-05 13:17:00.388482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
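The nvmf_tcp_init block above splits the two E810 ports into a target/initiator pair: cvl_0_0 is moved into a private network namespace and given the target address, while cvl_0_1 stays in the root namespace as the initiator. Condensed from the commands in the trace, with names and addresses unchanged (the iptables comment tag is omitted):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator

The two pings verify layer-3 reachability in both directions before any NVMe/TCP traffic is attempted; only then is nvmf_tgt started inside the namespace.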
00:12:38.038 [2024-12-05 13:17:00.390032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.038 [2024-12-05 13:17:00.390137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.038 [2024-12-05 13:17:00.390294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.038 [2024-12-05 13:17:00.390294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:38.610 [2024-12-05 13:17:01.110533] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:38.610 13:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.610 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:38.872 [2024-12-05 13:17:01.180318] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.872 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.872 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:38.872 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:38.872 13:17:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:43.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.381 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:57.381 rmmod nvme_tcp 00:12:57.381 rmmod nvme_fabrics 00:12:57.381 rmmod nvme_keyring 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 826434 ']' 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 826434 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 826434 ']' 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 826434 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
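With the target up and listening, connect_disconnect.sh provisions one subsystem over RPC and then drives the five connect/disconnect iterations reported above. A condensed sketch of that flow: every RPC name and argument is taken from the traced calls, while the nvme-cli loop body is an assumption about what each iteration does, since the trace shows only the resulting "disconnected 1 controller(s)" lines:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512                      # returns bdev name Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    for i in $(seq 1 5); do                               # num_iterations=5 above
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # assumed per-iteration body
    done

The trap installed before the loop ensures nvmftestfini still runs (with a shared-memory dump attempt) even if an iteration fails.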
00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 826434 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 826434' 00:12:57.381 killing process with pid 826434 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 826434 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 826434 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:57.381 13:17:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.297 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:59.297 00:12:59.297 real 0m30.457s 00:12:59.297 user 1m19.532s 00:12:59.297 sys 0m7.929s 00:12:59.297 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.297 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:59.297 ************************************ 00:12:59.297 END TEST nvmf_connect_disconnect 00:12:59.297 ************************************ 00:12:59.559 13:17:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:59.559 13:17:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:59.559 13:17:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.559 13:17:21 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:12:59.559 ************************************ 00:12:59.559 START TEST nvmf_multitarget 00:12:59.559 ************************************ 00:12:59.559 13:17:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:59.559 * Looking for test storage... 00:12:59.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:59.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.559 --rc genhtml_branch_coverage=1 00:12:59.559 --rc genhtml_function_coverage=1 00:12:59.559 --rc genhtml_legend=1 00:12:59.559 --rc geninfo_all_blocks=1 00:12:59.559 --rc geninfo_unexecuted_blocks=1 00:12:59.559 00:12:59.559 ' 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:59.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.559 --rc genhtml_branch_coverage=1 00:12:59.559 --rc genhtml_function_coverage=1 00:12:59.559 --rc genhtml_legend=1 00:12:59.559 --rc geninfo_all_blocks=1 00:12:59.559 --rc geninfo_unexecuted_blocks=1 00:12:59.559 00:12:59.559 ' 00:12:59.559 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:59.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.559 --rc genhtml_branch_coverage=1 00:12:59.559 --rc genhtml_function_coverage=1 00:12:59.559 --rc genhtml_legend=1 00:12:59.560 --rc geninfo_all_blocks=1 00:12:59.560 --rc geninfo_unexecuted_blocks=1 00:12:59.560 00:12:59.560 ' 00:12:59.560 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:59.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.560 --rc genhtml_branch_coverage=1 00:12:59.560 --rc genhtml_function_coverage=1 00:12:59.560 --rc genhtml_legend=1 00:12:59.560 --rc geninfo_all_blocks=1 00:12:59.560 --rc geninfo_unexecuted_blocks=1 00:12:59.560 00:12:59.560 ' 00:12:59.560 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.560 13:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:59.822 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.822 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.822 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.822 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.822 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.822 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.822 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:59.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:59.823 13:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:59.823 13:17:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
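gather_supported_nvmf_pci_devs runs again for the multitarget test: it loads per-family PCI device-ID tables (e810 and x722 for Intel, the mlx list continuing just below for Mellanox), matches them against the bus, and resolves every hit to its kernel net device through sysfs, exactly as in the connect_disconnect run. A rough standalone equivalent for the E810 ID found on this host; the lspci-based lookup is an assumption, while the 8086:159b ID and the sysfs path come from the trace:

    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net devices under $pci: ${path##*/}"
        done
    done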
00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:07.970 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:07.970 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:07.970 Found net devices under 0000:31:00.0: cvl_0_0 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.970 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:07.971 Found net devices under 0000:31:00.1: cvl_0_1 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:07.971 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:08.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:13:08.232 00:13:08.232 --- 10.0.0.2 ping statistics --- 00:13:08.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.232 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:08.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:13:08.232 00:13:08.232 --- 10.0.0.1 ping statistics --- 00:13:08.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.232 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=834932 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 834932 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 834932 ']' 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.232 13:17:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:08.232 [2024-12-05 13:17:30.692417] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
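As in the first test, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the application answers on its RPC socket before any test RPCs are sent (nvmfpid=834932 this time). A minimal sketch of that start/wait pattern; the polling loop is an illustration rather than waitforlisten's actual implementation, and it assumes the default /var/tmp/spdk.sock socket:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # retry until the app binds /var/tmp/spdk.sock
    done

Path-based UNIX sockets live on the filesystem, so rpc.py can reach the in-namespace target without entering the namespace itself.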
00:13:08.232 [2024-12-05 13:17:30.692472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.232 [2024-12-05 13:17:30.779914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:08.493 [2024-12-05 13:17:30.818445] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.493 [2024-12-05 13:17:30.818475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.493 [2024-12-05 13:17:30.818483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.493 [2024-12-05 13:17:30.818490] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.493 [2024-12-05 13:17:30.818496] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.493 [2024-12-05 13:17:30.820312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.493 [2024-12-05 13:17:30.820428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.493 [2024-12-05 13:17:30.820580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.493 [2024-12-05 13:17:30.820581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.066 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.066 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:09.066 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:09.066 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:09.066 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:09.066 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.066 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:09.066 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:09.066 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:09.327 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:09.327 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:09.327 "nvmf_tgt_1" 00:13:09.327 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:09.327 "nvmf_tgt_2" 00:13:09.327 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
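
At this point multitarget.sh has added two extra targets through the test helper and queries the list to count them; the jq length check follows in the next trace line. A hedged sketch of that add-and-verify step, with the helper path and flags exactly as used in this run:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32   # -s caps subsystems per target
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  count=$($RPC nvmf_get_targets | jq length)    # default target + the two above
  [ "$count" -eq 3 ] || { echo "expected 3 targets, got $count"; exit 1; }
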
00:13:09.327 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:09.589 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:09.589 13:17:31 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:09.589 true 00:13:09.589 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:09.589 true 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:09.850 rmmod nvme_tcp 00:13:09.850 rmmod nvme_fabrics 00:13:09.850 rmmod nvme_keyring 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 834932 ']' 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 834932 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 834932 ']' 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 834932 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.850 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 834932 00:13:10.111 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:10.111 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:10.111 13:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 834932' 00:13:10.111 killing process with pid 834932 00:13:10.111 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 834932 00:13:10.111 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 834932 00:13:10.111 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:10.111 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:10.111 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:10.111 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:10.111 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:10.111 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:10.111 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:10.111 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:10.111 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:10.111 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.111 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.111 13:17:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:12.658 00:13:12.658 real 0m12.708s 00:13:12.658 user 0m10.178s 00:13:12.658 sys 0m6.767s 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:12.658 ************************************ 00:13:12.658 END TEST nvmf_multitarget 00:13:12.658 ************************************ 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:12.658 ************************************ 00:13:12.658 START TEST nvmf_rpc 00:13:12.658 ************************************ 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:12.658 * Looking for test storage... 
00:13:12.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:12.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.658 --rc genhtml_branch_coverage=1 00:13:12.658 --rc genhtml_function_coverage=1 00:13:12.658 --rc genhtml_legend=1 00:13:12.658 --rc geninfo_all_blocks=1 00:13:12.658 --rc geninfo_unexecuted_blocks=1 00:13:12.658 00:13:12.658 ' 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:12.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.658 --rc genhtml_branch_coverage=1 00:13:12.658 --rc genhtml_function_coverage=1 00:13:12.658 --rc genhtml_legend=1 00:13:12.658 --rc geninfo_all_blocks=1 00:13:12.658 --rc geninfo_unexecuted_blocks=1 00:13:12.658 00:13:12.658 ' 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:12.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.658 --rc genhtml_branch_coverage=1 00:13:12.658 --rc genhtml_function_coverage=1 00:13:12.658 --rc genhtml_legend=1 00:13:12.658 --rc geninfo_all_blocks=1 00:13:12.658 --rc geninfo_unexecuted_blocks=1 00:13:12.658 00:13:12.658 ' 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:12.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:12.658 --rc genhtml_branch_coverage=1 00:13:12.658 --rc genhtml_function_coverage=1 00:13:12.658 --rc genhtml_legend=1 00:13:12.658 --rc geninfo_all_blocks=1 00:13:12.658 --rc geninfo_unexecuted_blocks=1 00:13:12.658 00:13:12.658 ' 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
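
The lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15 here) predates 2.x before picking the coverage flags. A minimal re-statement of that dotted-version comparison, assuming purely numeric components (the real script also validates each field with its decimal helper, omitted here):

  ver_lt() {                        # succeeds when $1 < $2
    local IFS=.- i
    local -a a b
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                        # equal is not less-than
  }
  ver_lt 1.15 2 && echo "lcov < 2: use the old --rc lcov_*_coverage flags"
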
00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:12.658 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:12.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:12.659 13:17:34 
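
Each source of paths/export.sh prepends the same Go/protoc/golangci directories again, which is why the PATH echoed above repeats them several times over. That is harmless for lookup, only wasteful; a generic dedup pass would trim it (dedup_path is not part of the SPDK scripts, just an illustration):

  dedup_path() {
    local out= dir
    while IFS= read -rd: dir; do
      [[ ":$out:" == *":$dir:"* ]] || out+=${out:+:}$dir
    done <<< "$PATH:"               # trailing ':' so the last entry is read
    printf '%s\n' "$out"
  }
  PATH=$(dedup_path)
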
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:12.659 13:17:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.803 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:20.804 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:20.804 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:20.804 Found net devices under 0000:31:00.0: cvl_0_0 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:20.804 Found net devices under 0000:31:00.1: cvl_0_1 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:20.804 13:17:42 
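
gather_supported_nvmf_pci_devs, traced above, matches the rig's two E810 functions (0x8086:0x159b) against its device-ID allow-lists, resolves each PCI function to a netdev through sysfs, and keeps it only when the link reports up. Roughly the following, with the addresses from this run; the real script's up-check and bookkeeping differ in detail:

  for pci in 0000:31:00.0 0000:31:00.1; do      # the two matched functions
    for path in /sys/bus/pci/devices/$pci/net/*; do
      dev=${path##*/}
      state=$(cat /sys/class/net/"$dev"/operstate 2>/dev/null)
      echo "Found net devices under $pci: $dev ($state)"
    done
  done
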
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:20.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:13:20.804 00:13:20.804 --- 10.0.0.2 ping statistics --- 00:13:20.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.804 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:20.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:13:20.804 00:13:20.804 --- 10.0.0.1 ping statistics --- 00:13:20.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.804 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:20.804 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.805 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:20.805 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:20.805 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:20.805 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:20.805 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:20.805 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.805 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=839976 00:13:20.805 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 839976 00:13:20.805 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 839976 ']' 00:13:20.805 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.805 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:20.805 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.805 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:20.805 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:20.805 13:17:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:20.805 [2024-12-05 13:17:42.588973] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
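
nvmfappstart, traced above, reduces to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers; waitforlisten is approximated below with an rpc.py poll (flags and paths as in this run; the real helper retries up to 100 times):

  ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # 0xF = reactors on cores 0-3
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1                   # bail out if the target died
    sleep 0.5
  done
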
00:13:20.805 [2024-12-05 13:17:42.589029] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.805 [2024-12-05 13:17:42.678621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.805 [2024-12-05 13:17:42.716589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.805 [2024-12-05 13:17:42.716626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.805 [2024-12-05 13:17:42.716635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.805 [2024-12-05 13:17:42.716641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.805 [2024-12-05 13:17:42.716647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:20.805 [2024-12-05 13:17:42.718118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.805 [2024-12-05 13:17:42.718231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.805 [2024-12-05 13:17:42.718370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:20.805 [2024-12-05 13:17:42.718372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:20.805 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.805 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:20.805 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:20.805 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:20.805 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.066 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.066 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:21.067 "tick_rate": 2400000000, 00:13:21.067 "poll_groups": [ 00:13:21.067 { 00:13:21.067 "name": "nvmf_tgt_poll_group_000", 00:13:21.067 "admin_qpairs": 0, 00:13:21.067 "io_qpairs": 0, 00:13:21.067 "current_admin_qpairs": 0, 00:13:21.067 "current_io_qpairs": 0, 00:13:21.067 "pending_bdev_io": 0, 00:13:21.067 "completed_nvme_io": 0, 00:13:21.067 "transports": [] 00:13:21.067 }, 00:13:21.067 { 00:13:21.067 "name": "nvmf_tgt_poll_group_001", 00:13:21.067 "admin_qpairs": 0, 00:13:21.067 "io_qpairs": 0, 00:13:21.067 "current_admin_qpairs": 0, 00:13:21.067 "current_io_qpairs": 0, 00:13:21.067 "pending_bdev_io": 0, 00:13:21.067 "completed_nvme_io": 0, 00:13:21.067 "transports": [] 00:13:21.067 }, 00:13:21.067 { 00:13:21.067 "name": "nvmf_tgt_poll_group_002", 00:13:21.067 "admin_qpairs": 0, 00:13:21.067 "io_qpairs": 0, 00:13:21.067 
"current_admin_qpairs": 0, 00:13:21.067 "current_io_qpairs": 0, 00:13:21.067 "pending_bdev_io": 0, 00:13:21.067 "completed_nvme_io": 0, 00:13:21.067 "transports": [] 00:13:21.067 }, 00:13:21.067 { 00:13:21.067 "name": "nvmf_tgt_poll_group_003", 00:13:21.067 "admin_qpairs": 0, 00:13:21.067 "io_qpairs": 0, 00:13:21.067 "current_admin_qpairs": 0, 00:13:21.067 "current_io_qpairs": 0, 00:13:21.067 "pending_bdev_io": 0, 00:13:21.067 "completed_nvme_io": 0, 00:13:21.067 "transports": [] 00:13:21.067 } 00:13:21.067 ] 00:13:21.067 }' 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.067 [2024-12-05 13:17:43.525944] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:21.067 "tick_rate": 2400000000, 00:13:21.067 "poll_groups": [ 00:13:21.067 { 00:13:21.067 "name": "nvmf_tgt_poll_group_000", 00:13:21.067 "admin_qpairs": 0, 00:13:21.067 "io_qpairs": 0, 00:13:21.067 "current_admin_qpairs": 0, 00:13:21.067 "current_io_qpairs": 0, 00:13:21.067 "pending_bdev_io": 0, 00:13:21.067 "completed_nvme_io": 0, 00:13:21.067 "transports": [ 00:13:21.067 { 00:13:21.067 "trtype": "TCP" 00:13:21.067 } 00:13:21.067 ] 00:13:21.067 }, 00:13:21.067 { 00:13:21.067 "name": "nvmf_tgt_poll_group_001", 00:13:21.067 "admin_qpairs": 0, 00:13:21.067 "io_qpairs": 0, 00:13:21.067 "current_admin_qpairs": 0, 00:13:21.067 "current_io_qpairs": 0, 00:13:21.067 "pending_bdev_io": 0, 00:13:21.067 "completed_nvme_io": 0, 00:13:21.067 "transports": [ 00:13:21.067 { 00:13:21.067 "trtype": "TCP" 00:13:21.067 } 00:13:21.067 ] 00:13:21.067 }, 00:13:21.067 { 00:13:21.067 "name": "nvmf_tgt_poll_group_002", 00:13:21.067 "admin_qpairs": 0, 00:13:21.067 "io_qpairs": 0, 00:13:21.067 "current_admin_qpairs": 0, 00:13:21.067 "current_io_qpairs": 0, 00:13:21.067 "pending_bdev_io": 0, 00:13:21.067 "completed_nvme_io": 0, 00:13:21.067 "transports": [ 00:13:21.067 { 00:13:21.067 "trtype": "TCP" 
00:13:21.067 } 00:13:21.067 ] 00:13:21.067 }, 00:13:21.067 { 00:13:21.067 "name": "nvmf_tgt_poll_group_003", 00:13:21.067 "admin_qpairs": 0, 00:13:21.067 "io_qpairs": 0, 00:13:21.067 "current_admin_qpairs": 0, 00:13:21.067 "current_io_qpairs": 0, 00:13:21.067 "pending_bdev_io": 0, 00:13:21.067 "completed_nvme_io": 0, 00:13:21.067 "transports": [ 00:13:21.067 { 00:13:21.067 "trtype": "TCP" 00:13:21.067 } 00:13:21.067 ] 00:13:21.067 } 00:13:21.067 ] 00:13:21.067 }' 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:21.067 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.329 Malloc1 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
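
The jcount/jsum checks above reduce the nvmf_get_stats dump: four poll groups, zero admin and I/O qpairs before any host connects, and a TCP transport entry once nvmf_create_transport has run. The same assertions as one jq pass (sketch; socket path assumed):

  stats=$(./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats)
  jq -e '(.poll_groups | length) == 4
         and ([.poll_groups[].admin_qpairs] | add) == 0
         and ([.poll_groups[].io_qpairs]    | add) == 0' <<< "$stats" \
    || { echo "unexpected nvmf stats"; exit 1; }
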
common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.329 [2024-12-05 13:17:43.721372] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:13:21.329 [2024-12-05 13:17:43.758265] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:13:21.329 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:21.329 could not add new controller: failed to write to nvme-fabrics device 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:21.329 13:17:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.329 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.330 13:17:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:23.252 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:23.252 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:23.252 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.252 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:23.252 13:17:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:25.167 [2024-12-05 13:17:47.634733] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:13:25.167 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:25.167 could not add new controller: failed to write to nvme-fabrics device 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.167 
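
The two rejected connects above ("does not allow host ... could not add new controller") bracket rpc.sh's authorization checks: a connect must fail while the host NQN is off the subsystem's whitelist, then succeed once nvmf_subsystem_add_host (or allow-any-host) runs. Stripped of the NOT/valid_exec_arg scaffolding, the flow is roughly:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  SUBNQN=nqn.2016-06.io.spdk:cnode1
  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }   # sketch-only helper
  # expected to fail: host not on the subsystem's whitelist yet
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN" \
    && { echo "connect unexpectedly allowed"; exit 1; }
  rpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"         # whitelist the host
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$SUBNQN" --hostnqn="$HOSTNQN"
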
13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.167 13:17:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:27.080 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:27.080 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:27.080 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.080 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:27.080 13:17:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:28.994 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:28.994 
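waitforserial (common/autotest_common.sh, lines 1202-1212 in the trace) is the polling helper that gates every connect: it re-runs lsblk until a block device reports the expected SERIAL. A re-sketch of that loop under the same 2-second, 16-attempt budget, with the serial taken from the trace:

serial=SPDKISFASTANDAWESOME
nvme_device_counter=1 nvme_devices=0 i=0
sleep 2
while (( i++ <= 15 )); do
  nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
  (( nvme_devices == nvme_device_counter )) && break
  sleep 2
done
(( nvme_devices == nvme_device_counter ))  # fail the caller if the namespace never appeared

waitforserial_disconnect (lines 1223-1235) is the mirror image: it polls with grep -q -w until the serial disappears from lsblk output.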
13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.994 [2024-12-05 13:17:51.362105] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.994 13:17:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:30.905 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:30.905 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:30.905 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:30.905 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:30.905 13:17:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:32.818 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:32.818 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:32.818 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:32.818 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:32.818 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:32.818 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:32.818 13:17:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:32.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:32.818 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:32.818 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:32.818 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:32.818 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.818 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:32.818 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:32.818 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:32.818 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:32.818 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.818 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.818 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.818 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:32.818 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.818 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.819 [2024-12-05 13:17:55.118775] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
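One full iteration of the seq 1 $loops cycle just completed: create the subsystem with a fixed serial, expose it on the TCP listener, attach bdev Malloc1 as namespace 5, open it to any host, connect and verify from the kernel initiator, then tear everything back down. Condensed into a hedged sketch (rpc.py again standing in for rpc_cmd; HOSTNQN/HOSTID are the uuid values from the trace):

for i in $(seq 1 5); do
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # bdev Malloc1 as nsid 5
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # waitforserial SPDKISFASTANDAWESOME, then:
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done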
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.819 13:17:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:34.200 13:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:34.200 13:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:34.200 13:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:34.200 13:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:34.200 13:17:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:36.108 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:36.108 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:36.108 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.108 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:36.108 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.108 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:36.108 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.368 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.368 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:36.368 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:36.368 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:36.368 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.368 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:36.368 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.368 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.369 [2024-12-05 13:17:58.837335] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.369 13:17:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:38.380 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.380 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:38.380 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.380 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:38.380 13:18:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:40.291 
13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:40.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.291 [2024-12-05 13:18:02.594151] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.291 13:18:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:41.675 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:41.675 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:41.675 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:41.675 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:41.675 13:18:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:43.584 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:43.585 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:43.585 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:43.585 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:43.585 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.585 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:43.585 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.844 [2024-12-05 13:18:06.275036] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.844 13:18:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:45.223 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:45.224 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:13:45.224 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:45.224 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:45.224 13:18:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:13:47.769 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:47.769 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:47.769 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:47.769 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:47.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:47.770 
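The rpc.sh@99 loop that starts here repeats the lifecycle five more times but never connects a host: it adds the namespace (auto-assigned nsid 1 this time, no -n flag) and immediately removes it, churning the subsystem RPCs themselves. A sketch under the same assumptions as above:

for i in $(seq 1 5); do
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # nsid auto-assigned (1 here)
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done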
13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 [2024-12-05 13:18:10.007592] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 [2024-12-05 13:18:10.079788] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 
13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 [2024-12-05 13:18:10.152022] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.770 [2024-12-05 13:18:10.220208] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.770 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.771 [2024-12-05 13:18:10.288407] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.771 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:48.032 "tick_rate": 2400000000, 00:13:48.032 "poll_groups": [ 00:13:48.032 { 00:13:48.032 "name": "nvmf_tgt_poll_group_000", 00:13:48.032 "admin_qpairs": 0, 00:13:48.032 "io_qpairs": 224, 00:13:48.032 "current_admin_qpairs": 0, 00:13:48.032 "current_io_qpairs": 0, 00:13:48.032 "pending_bdev_io": 0, 00:13:48.032 "completed_nvme_io": 400, 00:13:48.032 "transports": [ 00:13:48.032 { 00:13:48.032 "trtype": "TCP" 00:13:48.032 } 00:13:48.032 ] 00:13:48.032 }, 00:13:48.032 { 00:13:48.032 "name": "nvmf_tgt_poll_group_001", 00:13:48.032 "admin_qpairs": 1, 00:13:48.032 "io_qpairs": 223, 00:13:48.032 "current_admin_qpairs": 0, 00:13:48.032 "current_io_qpairs": 0, 00:13:48.032 "pending_bdev_io": 0, 00:13:48.032 "completed_nvme_io": 346, 00:13:48.032 "transports": [ 00:13:48.032 { 00:13:48.032 "trtype": "TCP" 00:13:48.032 } 00:13:48.032 ] 00:13:48.032 }, 00:13:48.032 { 00:13:48.032 "name": "nvmf_tgt_poll_group_002", 00:13:48.032 "admin_qpairs": 6, 00:13:48.032 "io_qpairs": 218, 00:13:48.032 "current_admin_qpairs": 0, 00:13:48.032 "current_io_qpairs": 0, 00:13:48.032 "pending_bdev_io": 0, 00:13:48.032 "completed_nvme_io": 220, 00:13:48.032 "transports": [ 00:13:48.032 { 00:13:48.032 "trtype": "TCP" 00:13:48.032 } 00:13:48.032 ] 00:13:48.032 }, 00:13:48.032 { 00:13:48.032 "name": "nvmf_tgt_poll_group_003", 00:13:48.032 "admin_qpairs": 0, 00:13:48.032 "io_qpairs": 224, 00:13:48.032 "current_admin_qpairs": 0, 00:13:48.032 "current_io_qpairs": 0, 00:13:48.032 "pending_bdev_io": 0, 00:13:48.032 "completed_nvme_io": 273, 00:13:48.032 "transports": [ 00:13:48.032 { 00:13:48.032 "trtype": "TCP" 00:13:48.032 } 00:13:48.032 ] 00:13:48.032 } 00:13:48.032 ] 00:13:48.032 }' 00:13:48.032 13:18:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:48.032 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:48.032 rmmod nvme_tcp 00:13:48.032 rmmod nvme_fabrics 00:13:48.032 rmmod nvme_keyring 00:13:48.033 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:48.033 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:48.033 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:48.033 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 839976 ']' 00:13:48.033 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 839976 00:13:48.033 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 839976 ']' 00:13:48.033 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 839976 00:13:48.033 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:48.033 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.033 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 839976 00:13:48.033 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.293 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.293 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 839976' 
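jsum (rpc.sh@19-20 above) sums one numeric field across all poll groups in the nvmf_get_stats JSON: jq emits one value per group and awk accumulates them, which is how the checks arrive at 7 admin qpairs and 889 io qpairs in total. A standalone re-sketch that queries the target directly instead of reusing the captured $stats variable:

jsum() {
  local filter=$1
  rpc.py nvmf_get_stats | jq "$filter" | awk '{s += $1} END {print s}'
}
jsum '.poll_groups[].admin_qpairs'   # 0 + 1 + 6 + 0 = 7 in the run above
jsum '.poll_groups[].io_qpairs'      # 224 + 223 + 218 + 224 = 889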
00:13:48.293 killing process with pid 839976 00:13:48.293 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 839976 00:13:48.293 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 839976 00:13:48.293 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:48.293 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:48.293 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:48.293 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:48.293 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:48.293 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:48.293 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:48.293 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:48.293 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:48.293 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.293 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:48.293 13:18:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.837 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:50.837 00:13:50.837 real 0m38.105s 00:13:50.837 user 1m53.686s 00:13:50.837 sys 0m8.086s 00:13:50.837 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:50.837 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:50.837 ************************************ 00:13:50.837 END TEST nvmf_rpc 00:13:50.837 ************************************ 00:13:50.837 13:18:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:50.837 13:18:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:50.837 13:18:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:50.837 13:18:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:50.837 ************************************ 00:13:50.837 START TEST nvmf_invalid 00:13:50.837 ************************************ 00:13:50.837 13:18:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:50.837 * Looking for test storage... 
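nvmftestfini above unloads the kernel initiator stack and then runs iptr, which removes only the test's firewall rules by filtering the SPDK_NVMF tag out of an iptables-save round trip. The two patterns as they appear in the trace (common.sh wraps the modprobe in a {1..20} retry loop with set +e in case the modules are still busy):

modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

iptables-save | grep -v SPDK_NVMF | iptables-restore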
00:13:50.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:50.837 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:50.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.838 --rc genhtml_branch_coverage=1 00:13:50.838 --rc genhtml_function_coverage=1 00:13:50.838 --rc genhtml_legend=1 00:13:50.838 --rc geninfo_all_blocks=1 00:13:50.838 --rc geninfo_unexecuted_blocks=1 00:13:50.838 00:13:50.838 ' 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:50.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.838 --rc genhtml_branch_coverage=1 00:13:50.838 --rc genhtml_function_coverage=1 00:13:50.838 --rc genhtml_legend=1 00:13:50.838 --rc geninfo_all_blocks=1 00:13:50.838 --rc geninfo_unexecuted_blocks=1 00:13:50.838 00:13:50.838 ' 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:50.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.838 --rc genhtml_branch_coverage=1 00:13:50.838 --rc genhtml_function_coverage=1 00:13:50.838 --rc genhtml_legend=1 00:13:50.838 --rc geninfo_all_blocks=1 00:13:50.838 --rc geninfo_unexecuted_blocks=1 00:13:50.838 00:13:50.838 ' 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:50.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:50.838 --rc genhtml_branch_coverage=1 00:13:50.838 --rc genhtml_function_coverage=1 00:13:50.838 --rc genhtml_legend=1 00:13:50.838 --rc geninfo_all_blocks=1 00:13:50.838 --rc geninfo_unexecuted_blocks=1 00:13:50.838 00:13:50.838 ' 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:50.838 13:18:13 
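
The lcov version gate traced above (lt 1.15 2 via cmp_versions) is a pure-bash dotted-version comparison: both strings are split on '.', '-' and ':' and walked field by field, with missing fields treated as zero. A standalone sketch of the same idea (the helpers in scripts/common.sh additionally validate every field through decimal, omitted here):

lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1                               # equal is not less-than
}

lt 1.15 2 && echo "lcov older than 2"      # the branch taken in the trace above
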
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:50.838 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
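
The "integer expression expected" complaint above is bash's test builtin rejecting an empty operand: with the controlling variable unset, '[' '' -eq 1 ']' exits with an error status rather than evaluating to false, so the branch is simply skipped and the run carries on. Giving the expansion a numeric default avoids the noise; for example (FLAG stands in for whichever variable was empty at common.sh line 33, which the trace does not show):

[ "${FLAG:-0}" -eq 1 ]                     # empty/unset FLAG now compares as 0 instead of erroring
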
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:50.838 13:18:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:13:58.984 Found 0000:31:00.0 (0x8086 - 0x159b) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:13:58.984 Found 0000:31:00.1 (0x8086 - 0x159b) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:13:58.984 Found net devices under 0000:31:00.0: cvl_0_0 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:58.984 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:13:58.985 Found net devices under 0000:31:00.1: cvl_0_1 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
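
The scan above maps each supported PCI function to its kernel interface by globbing sysfs, which is how 0000:31:00.0 and 0000:31:00.1 resolve to cvl_0_0 and cvl_0_1. The core of that loop, condensed from the trace:

for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the device name
    net_devs+=("${pci_net_devs[@]}")
done
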
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:58.985 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:59.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:59.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:13:59.247 00:13:59.247 --- 10.0.0.2 ping statistics --- 00:13:59.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.247 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:59.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:59.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:13:59.247 00:13:59.247 --- 10.0.0.1 ping statistics --- 00:13:59.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.247 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=850748 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 850748 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 850748 ']' 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.247 13:18:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:59.247 [2024-12-05 13:18:21.722666] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
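
nvmf_tcp_init, traced above, builds the topology by moving the target-side port into its own network namespace, so initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) traffic genuinely crosses the link between the two physical ports; the two pings verify both directions. The sequence, condensed from the trace (the iptables rule in the log also carries an SPDK_NVMF comment, which the iptables-save | grep -v SPDK_NVMF | iptables-restore teardown seen earlier uses to strip it again):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
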
00:13:59.247 [2024-12-05 13:18:21.722765] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.509 [2024-12-05 13:18:21.817500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.509 [2024-12-05 13:18:21.859936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.509 [2024-12-05 13:18:21.859973] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.509 [2024-12-05 13:18:21.859983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.509 [2024-12-05 13:18:21.859992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.509 [2024-12-05 13:18:21.859999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.509 [2024-12-05 13:18:21.861893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.509 [2024-12-05 13:18:21.862129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.509 [2024-12-05 13:18:21.862130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.509 [2024-12-05 13:18:21.861984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.081 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.081 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:00.081 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:00.081 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:00.081 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:00.081 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.081 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:00.081 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22745 00:14:00.341 [2024-12-05 13:18:22.726895] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:00.341 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:00.341 { 00:14:00.341 "nqn": "nqn.2016-06.io.spdk:cnode22745", 00:14:00.341 "tgt_name": "foobar", 00:14:00.341 "method": "nvmf_create_subsystem", 00:14:00.341 "req_id": 1 00:14:00.341 } 00:14:00.341 Got JSON-RPC error response 00:14:00.341 response: 00:14:00.341 { 00:14:00.341 "code": -32603, 00:14:00.341 "message": "Unable to find target foobar" 00:14:00.341 }' 00:14:00.341 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:00.341 { 00:14:00.341 "nqn": "nqn.2016-06.io.spdk:cnode22745", 00:14:00.341 "tgt_name": "foobar", 00:14:00.341 "method": "nvmf_create_subsystem", 00:14:00.341 "req_id": 1 00:14:00.341 } 00:14:00.341 Got JSON-RPC error response 00:14:00.341 
response: 00:14:00.341 { 00:14:00.341 "code": -32603, 00:14:00.341 "message": "Unable to find target foobar" 00:14:00.341 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:00.341 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:00.341 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode32334 00:14:00.602 [2024-12-05 13:18:22.919576] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32334: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:00.602 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:00.602 { 00:14:00.602 "nqn": "nqn.2016-06.io.spdk:cnode32334", 00:14:00.602 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:00.602 "method": "nvmf_create_subsystem", 00:14:00.602 "req_id": 1 00:14:00.602 } 00:14:00.602 Got JSON-RPC error response 00:14:00.602 response: 00:14:00.602 { 00:14:00.602 "code": -32602, 00:14:00.602 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:00.602 }' 00:14:00.602 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:00.602 { 00:14:00.602 "nqn": "nqn.2016-06.io.spdk:cnode32334", 00:14:00.602 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:00.602 "method": "nvmf_create_subsystem", 00:14:00.602 "req_id": 1 00:14:00.602 } 00:14:00.602 Got JSON-RPC error response 00:14:00.602 response: 00:14:00.602 { 00:14:00.602 "code": -32602, 00:14:00.602 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:00.602 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:00.602 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:00.602 13:18:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode12387 00:14:00.602 [2024-12-05 13:18:23.108101] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12387: invalid model number 'SPDK_Controller' 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:00.602 { 00:14:00.602 "nqn": "nqn.2016-06.io.spdk:cnode12387", 00:14:00.602 "model_number": "SPDK_Controller\u001f", 00:14:00.602 "method": "nvmf_create_subsystem", 00:14:00.602 "req_id": 1 00:14:00.602 } 00:14:00.602 Got JSON-RPC error response 00:14:00.602 response: 00:14:00.602 { 00:14:00.602 "code": -32602, 00:14:00.602 "message": "Invalid MN SPDK_Controller\u001f" 00:14:00.602 }' 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:00.602 { 00:14:00.602 "nqn": "nqn.2016-06.io.spdk:cnode12387", 00:14:00.602 "model_number": "SPDK_Controller\u001f", 00:14:00.602 "method": "nvmf_create_subsystem", 00:14:00.602 "req_id": 1 00:14:00.602 } 00:14:00.602 Got JSON-RPC error response 00:14:00.602 response: 00:14:00.602 { 00:14:00.602 "code": -32602, 00:14:00.602 "message": "Invalid MN SPDK_Controller\u001f" 00:14:00.602 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:00.602 13:18:23 
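
The three rejections above all follow one shape: call nvmf_create_subsystem through rpc.py with a single deliberately invalid field (unknown target name, serial number ending in the control character \x1f, model number ending in \x1f), capture the JSON-RPC error text, and glob-match the expected message. A minimal sketch of the first case (cnode1 is illustrative, since the script draws its cnode numbers from RANDOM, and the || true that keeps an errexit shell alive is an assumption about the harness):

out=$(scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1 2>&1) || true
[[ $out == *"Unable to find target"* ]]              # code -32603 in the response
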
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.602 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.864 13:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 
00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:00.864 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x58' 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ C == \- ]] 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'CO@<34^|){W|[*.kt8XsG' 00:14:00.865 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'CO@<34^|){W|[*.kt8XsG' nqn.2016-06.io.spdk:cnode7695 00:14:01.127 [2024-12-05 13:18:23.465259] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7695: invalid serial number 'CO@<34^|){W|[*.kt8XsG' 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:01.127 { 00:14:01.127 "nqn": "nqn.2016-06.io.spdk:cnode7695", 00:14:01.127 "serial_number": "CO@<34^|){W|[*.kt8XsG", 00:14:01.127 "method": "nvmf_create_subsystem", 00:14:01.127 "req_id": 1 00:14:01.127 } 00:14:01.127 Got JSON-RPC error response 00:14:01.127 response: 00:14:01.127 { 00:14:01.127 "code": -32602, 00:14:01.127 "message": "Invalid SN CO@<34^|){W|[*.kt8XsG" 00:14:01.127 }' 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:01.127 { 00:14:01.127 "nqn": "nqn.2016-06.io.spdk:cnode7695", 00:14:01.127 "serial_number": "CO@<34^|){W|[*.kt8XsG", 00:14:01.127 "method": "nvmf_create_subsystem", 00:14:01.127 "req_id": 1 00:14:01.127 } 00:14:01.127 Got JSON-RPC error response 00:14:01.127 response: 00:14:01.127 { 00:14:01.127 "code": -32602, 00:14:01.127 "message": "Invalid SN CO@<34^|){W|[*.kt8XsG" 00:14:01.127 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' 
'74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 
00:14:01.127 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24-@25 -- # [character loop condensed: each iteration runs printf %x <code>, echo -e '\xNN' and string+=<char>; the iterations wrapped here append, in order: 4 , ' n Z e N \x7f ) t + z ' y C V F ; a 6 y ] Q @ K ? m , > D _ x 2 k 2 )]
00:14:01.391 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ^ == \- ]]
00:14:01.391 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '^0*~n4,'\''nZeN)t+z'\''yCVF;a6y]Q@K?m,>D_x2k2)'
00:14:01.391 13:18:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '^0*~n4,'\''nZeN)t+z'\''yCVF;a6y]Q@K?m,>D_x2k2)' nqn.2016-06.io.spdk:cnode14722
00:14:01.653 [2024-12-05 13:18:23.970892] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14722: invalid model number '^0*~n4,'nZeN)t+z'yCVF;a6y]Q@K?m,>D_x2k2)'
00:14:01.653 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:14:01.653 {
00:14:01.653 "nqn": "nqn.2016-06.io.spdk:cnode14722",
00:14:01.653 "model_number": "^0*~n4,'\''nZeN\u007f)t+z'\''yCVF;a6y]Q@K?m,>D_x2k2)",
00:14:01.653 "method": "nvmf_create_subsystem",
00:14:01.653 "req_id": 1
00:14:01.653 }
00:14:01.653 Got JSON-RPC error response
00:14:01.653 response:
00:14:01.653 {
00:14:01.653 "code": -32602,
00:14:01.653 "message": "Invalid MN ^0*~n4,'\''nZeN\u007f)t+z'\''yCVF;a6y]Q@K?m,>D_x2k2)"
00:14:01.653 }'
00:14:01.653 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ $out == *\I\n\v\a\l\i\d\ \M\N* ]] [expansion of $out omitted; identical to the capture above]
00:14:01.653 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:14:01.653 [2024-12-05 13:18:24.159553] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:01.653 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:14:01.914 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:14:01.914 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:14:01.914 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:14:01.914 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:14:01.914 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:14:02.175 [2024-12-05 13:18:24.542082] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:14:02.175 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:14:02.175 {
00:14:02.175 "nqn": "nqn.2016-06.io.spdk:cnode",
00:14:02.175 "listen_address": {
00:14:02.175 "trtype": "tcp",
00:14:02.175 "traddr": "",
00:14:02.175 "trsvcid": "4421"
00:14:02.175 },
00:14:02.175 "method": "nvmf_subsystem_remove_listener",
00:14:02.175 "req_id": 1
00:14:02.175 }
00:14:02.175 Got JSON-RPC error response
00:14:02.175 response:
00:14:02.175 {
00:14:02.175 "code": -32602,
00:14:02.175 "message": "Invalid parameters"
00:14:02.175 }'
00:14:02.175 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ $out != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] [expansion omitted]
00:14:02.175 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25350 -i 0
00:14:02.436 [2024-12-05 13:18:24.730644] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25350: invalid cntlid range [0-65519]
00:14:02.436 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:14:02.436 {
00:14:02.436 "nqn": "nqn.2016-06.io.spdk:cnode25350",
00:14:02.436 "min_cntlid": 0,
00:14:02.436 "method": "nvmf_create_subsystem",
00:14:02.436 "req_id": 1
00:14:02.436 }
00:14:02.436 Got JSON-RPC error response
00:14:02.436 response:
00:14:02.436 {
00:14:02.436 "code": -32602,
00:14:02.436 "message": "Invalid cntlid range [0-65519]"
00:14:02.436 }'
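The character loop condensed above reduces to a small amount of bash; a stand-alone sketch of the same printf %x / echo -e / string+= pattern (gen_random_string is a hypothetical name; the in-tree loop in target/invalid.sh is inline and indexes a fixed character set rather than using $RANDOM):

  # Build a random string one character at a time, mirroring the traced pattern.
  gen_random_string() {
    local length=$1 ll string=
    for ((ll = 0; ll < length; ll++)); do
      local hex
      hex=$(printf '%x' $((RANDOM % 95 + 32)))   # a printable ASCII code point, 0x20..0x7e
      string+=$(echo -e "\x$hex")                # render the code point as a character
    done
    echo "$string"
  }

  gen_random_string 41   # e.g. one 41-character candidate model number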
00:14:02.436 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ $out == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] [expansion omitted]
00:14:02.436 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5441 -i 65520
00:14:02.436 [2024-12-05 13:18:24.911248] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5441: invalid cntlid range [65520-65519]
00:14:02.436 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request:
00:14:02.436 {
00:14:02.436 "nqn": "nqn.2016-06.io.spdk:cnode5441",
00:14:02.436 "min_cntlid": 65520,
00:14:02.436 "method": "nvmf_create_subsystem",
00:14:02.436 "req_id": 1
00:14:02.436 }
00:14:02.436 Got JSON-RPC error response
00:14:02.436 response:
00:14:02.436 {
00:14:02.436 "code": -32602,
00:14:02.436 "message": "Invalid cntlid range [65520-65519]"
00:14:02.436 }'
00:14:02.436 13:18:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ $out == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] [expansion omitted]
00:14:02.696 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13085 -I 0
00:14:02.696 [2024-12-05 13:18:25.099798] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13085: invalid cntlid range [1-0]
00:14:02.696 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request:
00:14:02.696 {
00:14:02.696 "nqn": "nqn.2016-06.io.spdk:cnode13085",
00:14:02.696 "max_cntlid": 0,
00:14:02.696 "method": "nvmf_create_subsystem",
00:14:02.696 "req_id": 1
00:14:02.696 }
00:14:02.696 Got JSON-RPC error response
00:14:02.696 response:
00:14:02.696 {
00:14:02.696 "code": -32602,
00:14:02.696 "message": "Invalid cntlid range [1-0]"
00:14:02.696 }'
00:14:02.696 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ $out == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] [expansion omitted]
00:14:02.957 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21132 -I 65520
00:14:02.957 [2024-12-05 13:18:25.288393] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21132: invalid cntlid range [1-65520]
00:14:02.957 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request:
00:14:02.957 {
00:14:02.957 "nqn": "nqn.2016-06.io.spdk:cnode21132",
00:14:02.957 "max_cntlid": 65520,
00:14:02.957 "method": "nvmf_create_subsystem",
00:14:02.957 "req_id": 1
00:14:02.957 }
00:14:02.957 Got JSON-RPC error response
00:14:02.957 response:
00:14:02.957 {
00:14:02.957 "code": -32602,
00:14:02.957 "message": "Invalid cntlid range [1-65520]"
00:14:02.957 }'
00:14:02.957 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ $out == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] [expansion omitted]
00:14:02.957 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12978 -i 6 -I 5
00:14:02.957 [2024-12-05 13:18:25.477031] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12978: invalid cntlid range [6-5]
00:14:02.957 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request:
00:14:02.957 {
00:14:02.957 "nqn": "nqn.2016-06.io.spdk:cnode12978",
00:14:02.957 "min_cntlid": 6,
00:14:02.957 "max_cntlid": 5,
00:14:02.957 "method": "nvmf_create_subsystem",
00:14:02.957 "req_id": 1
00:14:02.957 }
00:14:02.957 Got JSON-RPC error response
00:14:02.957 response:
00:14:02.957 {
00:14:02.957 "code": -32602,
00:14:02.957 "message": "Invalid cntlid range [6-5]"
00:14:02.957 }'
00:14:02.957 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ $out == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] [expansion omitted]
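Every negative case in this stretch has the same capture-and-match shape: run the RPC, keep stdout and stderr, then glob-match the JSON-RPC error text. A minimal stand-alone form (illustrative, not the exact invalid.sh code):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25350 -i 0 2>&1) || true
  [[ $out == *'Invalid cntlid range'* ]]   # the test passes only when the RPC fails this way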
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:14:03.218 {
00:14:03.218 "name": "foobar",
00:14:03.218 "method": "nvmf_delete_target",
00:14:03.218 "req_id": 1
00:14:03.218 }
00:14:03.218 Got JSON-RPC error response
00:14:03.218 response:
00:14:03.218 {
00:14:03.218 "code": -32602,
00:14:03.218 "message": "The specified target doesn'\''t exist, cannot delete it."
00:14:03.218 }'
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ $out == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] [expansion omitted]
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:03.218 rmmod nvme_tcp
00:14:03.218 rmmod nvme_fabrics
00:14:03.218 rmmod nvme_keyring
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 850748 ']'
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 850748
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 850748 ']'
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 850748
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 850748
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 850748'
00:14:03.218 killing process with pid 850748
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 850748
00:14:03.218 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 850748
00:14:03.479 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:14:03.479 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
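The killprocess walk above (common/autotest_common.sh@954-@978) approximately reconstructs to the helper below; the details are inferred from the trace, not copied from the source:

  killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                           # the '[' -z ... ']' guard
    kill -0 "$pid"                                      # still alive?
    local process_name
    if [[ $(uname) == Linux ]]; then
      process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
    fi
    [[ $process_name != sudo ]] || return 1             # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                         # reap it and collect the exit status
  }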
00:14:03.479 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:14:03.479 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr
00:14:03.479 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save
00:14:03.479 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:14:03.479 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore
00:14:03.479 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:03.479 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:03.479 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:03.479 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:03.479 13:18:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:05.444 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:05.444
00:14:05.444 real 0m15.061s
00:14:05.444 user 0m20.974s
00:14:05.444 sys 0m7.332s
00:14:05.444 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:05.444 13:18:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:14:05.444 ************************************
00:14:05.444 END TEST nvmf_invalid
00:14:05.444 ************************************
00:14:05.444 13:18:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:05.444 13:18:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:05.444 13:18:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:05.705 13:18:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:05.705 ************************************
00:14:05.705 START TEST nvmf_connect_stress
00:14:05.705 ************************************
00:14:05.705 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:05.705 * Looking for test storage...
00:14:05.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:05.705 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:05.705 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:05.705 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version
00:14:05.705 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:05.705 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333-@368 -- # [field compare condensed: ver1=(1 15) and ver2=(2) are split on '.-:' with op '<'; decimal() validates each field, the first fields compare 1 < 2, so cmp_versions returns 0]
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export LCOV_OPTS=' --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 '
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export LCOV='lcov' [with the same --rc flags as LCOV_OPTS; duplicate multi-line assignments condensed]
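The cmp_versions walk condensed above splits each version string on '.', '-' and ':' and compares field by field. A simplified sketch covering only the '<' case exercised here (the in-tree version also handles other operators):

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    local v
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
      # missing fields default to 0, so 1.15 is effectively compared to 2.0
      if ((${ver1[v]:-0} > ${ver2[v]:-0})); then return 1; fi
      if ((${ver1[v]:-0} < ${ver2[v]:-0})); then return 0; fi
    done
    return 1   # equal, so not strictly '<'
  }

  lt 1.15 2 && echo "1.15 < 2"   # matches the 'return 0' in the trace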
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9-@22 -- # [defaults condensed: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS=, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 (from nvme gen-hostnqn), NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396, NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID"), NVME_CONNECT='nvme connect', NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn]
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2-@6 -- # [PATH rewrites condensed: @2-@4 each prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to the inherited PATH, leaving many duplicated toolchain entries; @5 exports PATH and @6 echoes the final value, ending in /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin]
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:14:05.706 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:05.706 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:05.966 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:14:05.966 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:14:05.966 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:14:05.966 13:18:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315-@321 -- # [empty array declarations condensed: pci_devs (-a), pci_net_devs (-a), pci_drivers (-A), net_devs (-ga), e810 (-ga), x722 (-ga)]
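The "integer expression expected" complaint above is what '[' prints when an empty string reaches a numeric -eq test, as on nvmf/common.sh line 33 in this run. A defensive form (flag is a stand-in variable name):

  flag=''                                # empty, as in this run
  # [ "$flag" -eq 1 ] reproduces: [: : integer expression expected
  if [ "${flag:-0}" -eq 1 ]; then        # defaulting the operand avoids the error
    echo "feature enabled"
  fi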
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=()
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325-@344 -- # [PCI ID registrations condensed: e810 += $intel:0x1592, $intel:0x159b; x722 += $intel:0x37d2; mlx += $mellanox:0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013]
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:14:14.109 Found 0000:31:00.0 (0x8086 - 0x159b)
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:14:14.109 Found 0000:31:00.1 (0x8086 - 0x159b)
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368-@378 -- # [same ice/unbound/device-ID/rdma checks as above for 0000:31:00.1]
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:14:14.109 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:14:14.110 Found net devices under 0000:31:00.0: cvl_0_0
00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410-@428 -- # [same discovery steps as above for 0000:31:00.1]
00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:14:14.110 Found net devices under 0000:31:00.1: cvl_0_1
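The discovery pass above reduces to a sysfs glob per whitelisted PCI function; a stand-alone sketch with the two functions from this run hardcoded (variable names mirror the trace):

  net_devs=()
  for pci in 0000:31:00.0 0000:31:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue        # no bound net device, skip
    pci_net_devs=("${pci_net_devs[@]##*/}")        # strip the sysfs path prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
  done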
-- # net_devs+=("${pci_net_devs[@]}") 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.110 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:14.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:14:14.371 00:14:14.371 --- 10.0.0.2 ping statistics --- 00:14:14.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.371 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:14:14.371 00:14:14.371 --- 10.0.0.1 ping statistics --- 00:14:14.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.371 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=856606 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 856606 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 856606 ']' 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:14.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:14.371 13:18:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.371 [2024-12-05 13:18:36.933015] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:14:14.371 [2024-12-05 13:18:36.933087] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.632 [2024-12-05 13:18:37.043203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:14.632 [2024-12-05 13:18:37.093834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.632 [2024-12-05 13:18:37.093892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.632 [2024-12-05 13:18:37.093901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.632 [2024-12-05 13:18:37.093908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.632 [2024-12-05 13:18:37.093914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.632 [2024-12-05 13:18:37.095759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.632 [2024-12-05 13:18:37.095924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.632 [2024-12-05 13:18:37.095924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:15.204 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:15.204 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:15.204 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:15.204 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:15.204 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.204 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.204 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:15.204 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.204 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.465 [2024-12-05 13:18:37.775968] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 
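The trace above is nvmf_tcp_init bringing up the split-namespace topology the TCP autotests use: the target-side port is moved into its own network namespace so the SPDK target and the initiator can share one physical host. A minimal standalone sketch of that bring-up, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses from this particular run (the real logic lives in nvmf/common.sh):

  #!/usr/bin/env bash
  set -euo pipefail
  TARGET_IF=cvl_0_0     # moves into the namespace, will serve 10.0.0.2:4420
  INIT_IF=cvl_0_1       # stays in the root namespace as the initiator, 10.0.0.1
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INIT_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INIT_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INIT_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP listener port; the comment tag lets cleanup strip the rule later
  iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                        # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1    # target ns -> root ns

Both pings answering, as they do in the statistics above, is what lets nvmf_tcp_init return 0 and the harness launch nvmf_tgt inside the namespace via "ip netns exec".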
00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.465 [2024-12-05 13:18:37.800299] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.465 NULL1 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=856692 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.465 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:15.466 13:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.466 13:18:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.727 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.727 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:15.727 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.727 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.727 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.297 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.297 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:16.297 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.297 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.297 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.557 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.557 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:16.557 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.557 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.557 13:18:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.818 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.818 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:16.818 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.818 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.818 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.078 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.078 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:17.078 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.078 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.078 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.339 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.339 13:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:17.339 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.339 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.339 13:18:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.910 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.910 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:17.910 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.910 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.910 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.171 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.171 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:18.171 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.171 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.171 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.431 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.431 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:18.431 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.431 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.431 13:18:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.692 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.692 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:18.692 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.692 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.692 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:18.953 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.953 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:18.953 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.953 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.953 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.525 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.525 13:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:19.525 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.525 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.525 13:18:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:19.786 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.786 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:19.786 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.786 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.786 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.048 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.048 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:20.048 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.048 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.048 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.309 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.309 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:20.309 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.309 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.309 13:18:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:20.570 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.570 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:20.570 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.570 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.570 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.142 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.142 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:21.142 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.142 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.142 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.403 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.403 13:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:21.404 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.404 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.404 13:18:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.664 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.664 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:21.664 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.664 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.664 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:21.925 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.925 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:21.925 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.925 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.925 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.186 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.186 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:22.186 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.186 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.186 13:18:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:22.759 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.759 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:22.759 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.759 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.759 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.018 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.019 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:23.019 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.019 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.019 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.278 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.278 13:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:23.278 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.278 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.278 13:18:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:23.538 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.538 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:23.538 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:23.538 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.538 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.108 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.108 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:24.108 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.108 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.108 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.373 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.373 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:24.373 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.373 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.373 13:18:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.634 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.634 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:24.634 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.634 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.634 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.895 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.895 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:24.895 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.895 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.895 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.156 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.156 13:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:25.156 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:25.156 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.156 13:18:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:25.417 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 856692 00:14:25.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (856692) - No such process 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 856692 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:25.679 rmmod nvme_tcp 00:14:25.679 rmmod nvme_fabrics 00:14:25.679 rmmod nvme_keyring 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 856606 ']' 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 856606 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 856606 ']' 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 856606 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 856606 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 
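The long run of "kill -0 856692" checks above is connect_stress.sh polling the stress tool for liveness: kill -0 delivers no signal and simply succeeds while the PID exists, so the harness keeps replaying its pre-generated RPC batch against the target until the tool exits, at which point line 34 logs "No such process" and the trap-driven cleanup seen here takes over. A hedged sketch of that pattern (stress_tool and replay_rpcs are illustrative placeholders, not the harness's own names):

  stress_tool &                  # e.g. the connect_stress binary launched earlier
  PERF_PID=$!
  while kill -0 "$PERF_PID" 2>/dev/null; do
      replay_rpcs                # fire the canned rpc.txt batch at the target
      sleep 1
  done
  wait "$PERF_PID"               # collect the exit status once polling stops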
00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 856606' 00:14:25.679 killing process with pid 856606 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 856606 00:14:25.679 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 856606 00:14:25.940 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:25.940 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:25.940 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:25.940 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:25.940 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:25.940 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:25.940 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:25.940 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:25.940 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:25.940 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.940 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.940 13:18:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.855 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:27.855 00:14:27.855 real 0m22.301s 00:14:27.855 user 0m42.393s 00:14:27.855 sys 0m10.002s 00:14:27.855 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:27.855 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.855 ************************************ 00:14:27.855 END TEST nvmf_connect_stress 00:14:27.855 ************************************ 00:14:27.855 13:18:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:27.855 13:18:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:27.855 13:18:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:27.855 13:18:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:27.855 ************************************ 00:14:27.855 START TEST nvmf_fused_ordering 00:14:27.855 ************************************ 00:14:27.855 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:28.116 * Looking for test storage... 
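Before the fused_ordering test proper starts, the harness probes lcov (trace below: lt 1.15 2 via cmp_versions in scripts/common.sh) to decide which coverage-flag spelling to export. A minimal sketch of that dot/dash/colon-separated numeric compare; ver_lt is an illustrative name, the real helpers are lt/cmp_versions as the trace shows:

  ver_lt() {                       # "is $1 < $2?" component-wise; assumes numeric parts
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i x y
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          x=${a[i]:-0} y=${b[i]:-0}  # missing components count as 0
          (( 10#$x < 10#$y )) && return 0
          (( 10#$x > 10#$y )) && return 1
      done
      return 1                     # equal => not less-than
  }
  ver_lt 1.15 2 && echo "lcov older than 2: use the 1.x option spelling"

Since 1.15 < 2 here, the harness exports the older --rc lcov_branch_coverage=1 style options, which is exactly what the LCOV_OPTS/LCOV assignments in the trace below show.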
00:14:28.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:28.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.117 --rc genhtml_branch_coverage=1 00:14:28.117 --rc genhtml_function_coverage=1 00:14:28.117 --rc genhtml_legend=1 00:14:28.117 --rc geninfo_all_blocks=1 00:14:28.117 --rc geninfo_unexecuted_blocks=1 00:14:28.117 00:14:28.117 ' 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:28.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.117 --rc genhtml_branch_coverage=1 00:14:28.117 --rc genhtml_function_coverage=1 00:14:28.117 --rc genhtml_legend=1 00:14:28.117 --rc geninfo_all_blocks=1 00:14:28.117 --rc geninfo_unexecuted_blocks=1 00:14:28.117 00:14:28.117 ' 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:28.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.117 --rc genhtml_branch_coverage=1 00:14:28.117 --rc genhtml_function_coverage=1 00:14:28.117 --rc genhtml_legend=1 00:14:28.117 --rc geninfo_all_blocks=1 00:14:28.117 --rc geninfo_unexecuted_blocks=1 00:14:28.117 00:14:28.117 ' 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:28.117 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.117 --rc genhtml_branch_coverage=1 00:14:28.117 --rc genhtml_function_coverage=1 00:14:28.117 --rc genhtml_legend=1 00:14:28.117 --rc geninfo_all_blocks=1 00:14:28.117 --rc geninfo_unexecuted_blocks=1 00:14:28.117 00:14:28.117 ' 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.117 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:28.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:28.118 13:18:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:36.263 13:18:58 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:36.263 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:36.263 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:36.263 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:36.264 Found net devices under 0000:31:00.0: cvl_0_0 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:36.264 Found net devices under 0000:31:00.1: cvl_0_1 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.264 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.525 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:36.525 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.525 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:36.526 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:36.526 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:36.526 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:36.526 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:36.526 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:36.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:14:36.526 00:14:36.526 --- 10.0.0.2 ping statistics --- 00:14:36.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.526 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:14:36.526 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:36.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:36.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:14:36.526 00:14:36.526 --- 10.0.0.1 ping statistics --- 00:14:36.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.526 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:14:36.526 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.526 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:36.526 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:36.526 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.526 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:36.526 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:36.526 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.526 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:36.526 13:18:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:36.526 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:36.526 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:36.526 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:36.526 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:36.526 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=863519 00:14:36.526 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 863519 00:14:36.526 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:36.526 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 863519 ']' 00:14:36.526 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.526 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.526 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
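Condensed, the nvmf_tcp_init steps just logged amount to the following; interface names (cvl_0_0/cvl_0_1), the namespace name, and the 10.0.0.x addresses are the ones recorded above, and the two pings then prove the path between the default namespace and the target namespace in both directions:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1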
/var/tmp/spdk.sock...' 00:14:36.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.526 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.526 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:36.788 [2024-12-05 13:18:59.124252] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:14:36.788 [2024-12-05 13:18:59.124316] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.788 [2024-12-05 13:18:59.235719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.788 [2024-12-05 13:18:59.286433] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.788 [2024-12-05 13:18:59.286493] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.788 [2024-12-05 13:18:59.286503] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.788 [2024-12-05 13:18:59.286510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.788 [2024-12-05 13:18:59.286517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.788 [2024-12-05 13:18:59.287319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:37.360 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:37.360 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:37.360 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:37.360 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:37.360 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:37.622 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.622 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:37.622 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.622 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:37.622 [2024-12-05 13:18:59.974263] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.622 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.622 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:37.622 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.622 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:37.622 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 
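The two trace hints in the app notices above can be used as-is while the target is still running; both forms are quoted from the log output itself:

    spdk_trace -s nvmf -i 0        # live snapshot of the 0xFFFF tracepoint group mask
    cp /dev/shm/nvmf_trace.0 .     # or keep the shm file for offline analysis/debug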
-- # [[ 0 == 0 ]] 00:14:37.622 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.622 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.622 13:18:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:37.622 [2024-12-05 13:18:59.998599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.622 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.622 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:37.622 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.622 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:37.622 NULL1 00:14:37.622 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.622 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:37.622 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.622 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:37.622 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.622 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:37.622 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.622 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:37.622 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.622 13:19:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:37.622 [2024-12-05 13:19:00.070191] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
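The rpc_cmd calls above are effectively scripts/rpc.py invocations against the /var/tmp/spdk.sock socket the target announced. Spelled out with the exact arguments recorded in the log, the bring-up is:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512     # 1000 MiB null bdev, 512-byte blocks
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1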
00:14:37.622 [2024-12-05 13:19:00.070261] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid863701 ] 00:14:37.911 Attached to nqn.2016-06.io.spdk:cnode1 00:14:37.911 Namespace ID: 1 size: 1GB
00:14:37.911 fused_ordering(0) [fused_ordering(1) through fused_ordering(1022) elided: the exerciser logged all 1024 counters sequentially between 00:14:37.911 and 00:14:39.750] fused_ordering(1023) 00:14:39.750
13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:39.750 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:39.750 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:39.750 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:39.750 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:39.750 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:39.750 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:39.750 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:39.750 rmmod nvme_tcp 00:14:39.750 rmmod nvme_fabrics 00:14:39.750 rmmod nvme_keyring 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:40.010 13:19:02
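The fused_ordering(0)..fused_ordering(1023) counters compressed above are the exerciser's per-iteration progress output; each line appears to mark one completed step of its fused-command ordering check against the NULL1 namespace on cnode1. The invocation, repeated verbatim from the log:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'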
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 863519 ']' 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 863519 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 863519 ']' 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 863519 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 863519 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 863519' 00:14:40.010 killing process with pid 863519 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 863519 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 863519 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:40.010 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:40.011 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:40.011 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:40.011 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:40.011 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:40.011 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:40.011 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.011 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.011 13:19:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:42.558 00:14:42.558 real 0m14.168s 00:14:42.558 user 0m7.155s 00:14:42.558 sys 0m7.677s 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:42.558 ************************************ 00:14:42.558 END TEST nvmf_fused_ordering 00:14:42.558 
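Teardown mirrors setup: the iptr helper strips only the rules tagged with the SPDK_NVMF comment, and the namespace removal itself is hidden behind _remove_spdk_ns, whose body is redirected to /dev/null above. A sketch of the equivalent manual cleanup; the netns delete is an assumption, since the log does not show _remove_spdk_ns's commands:

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the harness's rules
    ip netns delete cvl_0_0_ns_spdk                        # assumed; not visible in the log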
************************************ 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:42.558 ************************************ 00:14:42.558 START TEST nvmf_ns_masking 00:14:42.558 ************************************ 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:42.558 * Looking for test storage... 00:14:42.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:42.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.558 --rc genhtml_branch_coverage=1 00:14:42.558 --rc genhtml_function_coverage=1 00:14:42.558 --rc genhtml_legend=1 00:14:42.558 --rc geninfo_all_blocks=1 00:14:42.558 --rc geninfo_unexecuted_blocks=1 00:14:42.558 00:14:42.558 ' 00:14:42.558 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:42.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.558 --rc genhtml_branch_coverage=1 00:14:42.558 --rc genhtml_function_coverage=1 00:14:42.558 --rc genhtml_legend=1 00:14:42.558 --rc geninfo_all_blocks=1 00:14:42.558 --rc geninfo_unexecuted_blocks=1 00:14:42.558 00:14:42.558 ' 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:42.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.559 --rc genhtml_branch_coverage=1 00:14:42.559 --rc genhtml_function_coverage=1 00:14:42.559 --rc genhtml_legend=1 00:14:42.559 --rc geninfo_all_blocks=1 00:14:42.559 --rc geninfo_unexecuted_blocks=1 00:14:42.559 00:14:42.559 ' 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:42.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.559 --rc genhtml_branch_coverage=1 00:14:42.559 --rc genhtml_function_coverage=1 00:14:42.559 --rc genhtml_legend=1 00:14:42.559 --rc geninfo_all_blocks=1 00:14:42.559 --rc geninfo_unexecuted_blocks=1 00:14:42.559 00:14:42.559 ' 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
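The scripts/common.sh walk above (cmp_versions, driven by lt 1.15 2 to test the lcov version) compares dotted version strings field by field. A compact equivalent using sort -V, offered only as a sketch of the same comparison, not the harness's implementation:

    lt() { [ "$1" = "$2" ] && return 1; [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
    lt 1.15 2 && echo "lcov 1.15 predates 2: use the 1.x option set"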
nvmf/common.sh@7 -- # uname -s 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:42.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
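[annotation] Two things worth noting in this block. First, paths/export.sh is re-sourced on each nested source, so the same Go/protoc/golangci directories pile up in PATH; that is cosmetic. Second, the captured error "common.sh: line 33: [: : integer expression expected" comes from the traced test '[' '' -eq 1 ']': a numeric [ test against a variable that is empty in this environment. The run continues because the test simply evaluates false. A defensive form of such a test (illustrative only; the actual variable at line 33 is elided in this trace):

    # Hypothetical guard: confirm the flag is numeric before comparing it.
    flag=${flag:-}                        # empty in this CI environment
    if [[ $flag =~ ^[0-9]+$ ]] && (( flag == 1 )); then
        echo "flag enabled"
    fi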
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=587ca1b1-f270-45a2-a927-f8d7093ce7dc 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=8c9a1c6b-23fa-4f42-b661-8fb24218c9db 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c269dcf2-1c76-4658-afa0-0a4607de0f1a 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:42.559 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.560 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.560 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.560 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:42.560 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:42.560 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:42.560 13:19:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:50.696 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.696 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:50.696 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:50.696 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:50.696 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:50.696 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:50.696 13:19:12 
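[annotation] ns_masking.sh has now fixed its fixture: an RPC socket for the host-side app, five loop iterations, two freshly generated namespace UUIDs, and stable NQNs/IDs. For reference, the identities this particular run uses (values copied from the trace):

    hostsock=/var/tmp/host.sock
    loops=5
    ns1uuid=587ca1b1-f270-45a2-a927-f8d7093ce7dc
    ns2uuid=8c9a1c6b-23fa-4f42-b661-8fb24218c9db
    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN1=nqn.2016-06.io.spdk:host1
    HOSTNQN2=nqn.2016-06.io.spdk:host2
    HOSTID=c269dcf2-1c76-4658-afa0-0a4607de0f1a   # passed to nvme connect as -I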
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:50.696 13:19:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:50.696 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:50.696 13:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:50.696 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:50.696 Found net devices under 0000:31:00.0: cvl_0_0 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
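[annotation] gather_supported_nvmf_pci_devs matched both ports of an Intel E810 NIC (device 0x159b, driver ice) and resolved each PCI function to its kernel interface through sysfs. The per-function lookup it performs reduces to (sysfs paths are standard Linux; the cvl_0_* names are this rig's):

    for pci in 0000:31:00.0 0000:31:00.1; do
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done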
00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:50.696 Found net devices under 0000:31:00.1: cvl_0_1 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:50.696 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:50.957 13:19:13 
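[annotation] nvmf_tcp_init has now split the two E810 ports across a network namespace: cvl_0_0 becomes the target NIC at 10.0.0.2 inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, so target and initiator talk over a real wire on one host. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up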
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:50.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:14:50.957 00:14:50.957 --- 10.0.0.2 ping statistics --- 00:14:50.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.957 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:50.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:50.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:14:50.957 00:14:50.957 --- 10.0.0.1 ping statistics --- 00:14:50.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.957 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=868996 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 868996 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 868996 ']' 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.957 13:19:13 
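[annotation] With both directions ping-clean and nvme-tcp loaded, nvmfappstart launches nvmf_tgt inside the target namespace (NVMF_APP was prefixed with the netns command above) and waitforlisten polls until PID 868996 answers on /var/tmp/spdk.sock. A simplified equivalent of that start-and-wait pattern (not the verbatim helper):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # poll the RPC socket instead of sleeping a fixed time
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.5
    done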
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:50.957 13:19:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:50.957 [2024-12-05 13:19:13.437471] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:14:50.957 [2024-12-05 13:19:13.437537] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.219 [2024-12-05 13:19:13.528118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.219 [2024-12-05 13:19:13.568099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.219 [2024-12-05 13:19:13.568136] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.219 [2024-12-05 13:19:13.568144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:51.219 [2024-12-05 13:19:13.568151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:51.219 [2024-12-05 13:19:13.568157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
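[annotation] The target came up on core 0 with tracepoint group mask 0xFFFF enabled; per its own startup notice, a snapshot of runtime events can be pulled later with:

    spdk_trace -s nvmf -i 0    # or copy /dev/shm/nvmf_trace.0 for offline analysis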
00:14:51.219 [2024-12-05 13:19:13.568742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.789 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.789 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:14:51.789 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:51.789 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:51.789 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:51.789 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:51.789 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:52.050 [2024-12-05 13:19:14.411483] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.050 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:52.050 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:52.050 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:52.050 Malloc1 00:14:52.050 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:52.311 Malloc2 00:14:52.311 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:52.571 13:19:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:52.571 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.831 [2024-12-05 13:19:15.252918] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.831 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:52.831 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c269dcf2-1c76-4658-afa0-0a4607de0f1a -a 10.0.0.2 -s 4420 -i 4 00:14:53.090 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:53.090 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:53.090 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:53.090 13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:53.090 
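[annotation] Lines 53-64 of ns_masking.sh have now built the whole target and attached the kernel initiator. Distilled to the bare RPC/CLI sequence (rpc.py paths shortened; values verbatim from the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I c269dcf2-1c76-4658-afa0-0a4607de0f1a -a 10.0.0.2 -s 4420 -i 4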
13:19:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:55.000 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:55.000 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:55.000 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:55.000 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:55.000 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:55.000 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:14:55.000 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:55.000 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:55.000 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:55.001 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:55.001 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:55.001 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.001 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:55.001 [ 0]:0x1 00:14:55.001 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:55.001 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.261 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=13641ed244394c8b9ffe232dde1e55f4 00:14:55.261 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 13641ed244394c8b9ffe232dde1e55f4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.261 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:55.261 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:55.261 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.261 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:55.261 [ 0]:0x1 00:14:55.261 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:55.261 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.522 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=13641ed244394c8b9ffe232dde1e55f4 00:14:55.522 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 13641ed244394c8b9ffe232dde1e55f4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.522 13:19:17 
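[annotation] The visibility oracle used for the rest of the run is ns_masking.sh's ns_is_visible: the nsid must show up in nvme list-ns (the "[ 0]:0x1" output) and its NGUID must be non-zero. Reconstructed from the @43-@45 trace lines (a sketch, not the verbatim function):

    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"        # prints "[ 0]:0x1" when listed
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # a masked namespace reports the all-zeroes NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1    # succeeds while nsid 1 is exposed to this host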
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:55.522 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:55.522 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:55.522 [ 1]:0x2 00:14:55.522 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:55.522 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:55.522 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f3adebcc10446319a94e3db796ee303 00:14:55.522 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f3adebcc10446319a94e3db796ee303 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:55.522 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:55.522 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:55.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.522 13:19:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.785 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:56.045 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:56.045 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c269dcf2-1c76-4658-afa0-0a4607de0f1a -a 10.0.0.2 -s 4420 -i 4 00:14:56.045 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:56.045 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:56.045 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.045 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:14:56.045 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:14:56.045 13:19:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # 
return 0 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:58.586 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:58.587 [ 0]:0x2 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
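[annotation] This is the core of the masking test: nsid 1 was re-added with --no-auto-visible, so even the reconnected host sees only nsid 2 (the NGUID for 0x1 comes back all zeroes). The NOT wrapper from autotest_common.sh (the @652-@679 lines above) inverts a command's exit status so an expected failure passes; in sketch form (simplified, the real helper also validates its argument and distinguishes signal exits):

    NOT() {
        if "$@"; then
            return 1    # command unexpectedly succeeded
        fi
        return 0        # the es=1 path in the trace: failure was expected
    }
    NOT ns_is_visible 0x1    # passes only while nsid 1 is masked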
nguid=4f3adebcc10446319a94e3db796ee303 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f3adebcc10446319a94e3db796ee303 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.587 13:19:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:58.587 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:58.587 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.587 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:58.587 [ 0]:0x1 00:14:58.587 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:58.587 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.587 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=13641ed244394c8b9ffe232dde1e55f4 00:14:58.587 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 13641ed244394c8b9ffe232dde1e55f4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.587 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:58.587 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.587 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:58.587 [ 1]:0x2 00:14:58.587 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:58.587 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f3adebcc10446319a94e3db796ee303 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f3adebcc10446319a94e3db796ee303 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:58.848 13:19:21 
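[annotation] Visibility tracks the per-namespace host list exactly: the grant makes "[ 0]:0x1" reappear with its real NGUID, and the revoke masks it again. The RPC pair under test (verbatim arguments, paths shortened):

    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # nsid 1 exposed to host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # nsid 1 masked again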
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:58.848 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:59.108 [ 0]:0x2 00:14:59.108 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:59.108 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:59.108 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f3adebcc10446319a94e3db796ee303 00:14:59.108 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f3adebcc10446319a94e3db796ee303 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.108 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:59.109 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:59.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.109 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:59.368 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:59.368 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c269dcf2-1c76-4658-afa0-0a4607de0f1a -a 10.0.0.2 -s 4420 -i 4 00:14:59.368 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:59.368 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:14:59.368 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.368 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:59.368 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:59.368 13:19:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:01.277 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:01.277 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:01.277 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:01.537 [ 0]:0x1 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=13641ed244394c8b9ffe232dde1e55f4 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 13641ed244394c8b9ffe232dde1e55f4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:01.537 [ 1]:0x2 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:01.537 13:19:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:01.537 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f3adebcc10446319a94e3db796ee303 00:15:01.537 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f3adebcc10446319a94e3db796ee303 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.537 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:01.797 [ 0]:0x2 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:01.797 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f3adebcc10446319a94e3db796ee303 00:15:01.798 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f3adebcc10446319a94e3db796ee303 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:01.798 13:19:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:01.798 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:01.798 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:01.798 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.798 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.798 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.798 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.798 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.798 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.798 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.798 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:01.798 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:02.058 [2024-12-05 13:19:24.499525] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:02.058 request: 00:15:02.058 { 00:15:02.058 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.058 "nsid": 2, 00:15:02.058 "host": "nqn.2016-06.io.spdk:host1", 00:15:02.058 "method": "nvmf_ns_remove_host", 00:15:02.058 "req_id": 1 00:15:02.058 } 00:15:02.058 Got JSON-RPC error response 00:15:02.058 response: 00:15:02.058 { 00:15:02.058 "code": -32602, 00:15:02.058 "message": "Invalid parameters" 00:15:02.058 } 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:02.058 13:19:24 
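[annotation] A negative-path check: nsid 2 was created auto-visible, so it carries no masking state and the target rejects an attempt to remove a host from it, which the test asserts via NOT. The JSON-RPC exchange above is the evidence; the assertion reduces to:

    NOT rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
    # expected response: {"code": -32602, "message": "Invalid parameters"}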
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:02.058 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:02.318 [ 0]:0x2 00:15:02.318 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:02.318 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:02.318 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=4f3adebcc10446319a94e3db796ee303 00:15:02.318 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 4f3adebcc10446319a94e3db796ee303 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.318 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:02.318 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:02.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.318 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=871238 00:15:02.318 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:02.318 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.318 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 871238 /var/tmp/host.sock 00:15:02.318 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 871238 ']' 00:15:02.318 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:02.318 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:02.318 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:02.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:02.318 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:02.318 13:19:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:02.578 [2024-12-05 13:19:24.897947] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:15:02.578 [2024-12-05 13:19:24.897998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid871238 ] 00:15:02.578 [2024-12-05 13:19:24.992937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.578 [2024-12-05 13:19:25.029543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.147 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.147 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:03.147 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.407 13:19:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:03.667 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 587ca1b1-f270-45a2-a927-f8d7093ce7dc 00:15:03.667 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:03.667 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 587CA1B1F27045A2A927F8D7093CE7DC -i 00:15:03.667 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 8c9a1c6b-23fa-4f42-b661-8fb24218c9db 00:15:03.667 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:03.667 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 8C9A1C6B23FA4F42B6618FB24218C9DB -i 00:15:03.927 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:04.188 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:04.188 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:04.188 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:04.450 nvme0n1 00:15:04.450 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:04.450 13:19:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:05.021 nvme1n2 00:15:05.021 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:05.021 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:05.021 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:05.021 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:05.021 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:05.021 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:05.021 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:05.021 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:05.021 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:05.281 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 587ca1b1-f270-45a2-a927-f8d7093ce7dc == \5\8\7\c\a\1\b\1\-\f\2\7\0\-\4\5\a\2\-\a\9\2\7\-\f\8\d\7\0\9\3\c\e\7\d\c ]] 00:15:05.281 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:05.281 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:05.281 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:05.541 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
8c9a1c6b-23fa-4f42-b661-8fb24218c9db == \8\c\9\a\1\c\6\b\-\2\3\f\a\-\4\f\4\2\-\b\6\6\1\-\8\f\b\2\4\2\1\8\c\9\d\b ]] 00:15:05.541 13:19:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.541 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:05.802 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 587ca1b1-f270-45a2-a927-f8d7093ce7dc 00:15:05.802 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:05.802 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 587CA1B1F27045A2A927F8D7093CE7DC 00:15:05.802 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:05.802 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 587CA1B1F27045A2A927F8D7093CE7DC 00:15:05.802 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.802 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.802 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.802 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.802 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.802 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:05.802 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:05.802 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:05.802 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 587CA1B1F27045A2A927F8D7093CE7DC 00:15:06.063 [2024-12-05 13:19:28.390227] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:06.063 [2024-12-05 13:19:28.390259] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:06.063 [2024-12-05 13:19:28.390269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:06.063 request: 00:15:06.063 { 00:15:06.063 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:06.063 "namespace": { 00:15:06.063 "bdev_name": 
"invalid", 00:15:06.063 "nsid": 1, 00:15:06.063 "nguid": "587CA1B1F27045A2A927F8D7093CE7DC", 00:15:06.063 "no_auto_visible": false, 00:15:06.063 "hide_metadata": false 00:15:06.063 }, 00:15:06.063 "method": "nvmf_subsystem_add_ns", 00:15:06.063 "req_id": 1 00:15:06.063 } 00:15:06.063 Got JSON-RPC error response 00:15:06.063 response: 00:15:06.063 { 00:15:06.063 "code": -32602, 00:15:06.063 "message": "Invalid parameters" 00:15:06.063 } 00:15:06.063 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:06.063 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:06.063 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:06.063 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:06.063 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 587ca1b1-f270-45a2-a927-f8d7093ce7dc 00:15:06.063 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:06.063 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 587CA1B1F27045A2A927F8D7093CE7DC -i 00:15:06.063 13:19:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:08.607 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:08.607 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:08.608 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:08.608 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:08.608 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 871238 00:15:08.608 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 871238 ']' 00:15:08.608 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 871238 00:15:08.608 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:08.608 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.608 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 871238 00:15:08.608 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:08.608 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:08.608 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 871238' 00:15:08.608 killing process with pid 871238 00:15:08.608 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 871238 00:15:08.608 13:19:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 871238 00:15:08.608 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:08.868 rmmod nvme_tcp 00:15:08.868 rmmod nvme_fabrics 00:15:08.868 rmmod nvme_keyring 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 868996 ']' 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 868996 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 868996 ']' 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 868996 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 868996 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 868996' 00:15:08.868 killing process with pid 868996 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 868996 00:15:08.868 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 868996 00:15:09.129 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:09.129 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:09.130 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:09.130 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:09.130 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:09.130 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:09.130 
13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:09.130 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:09.130 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:09.130 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.130 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:09.130 13:19:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.044 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:11.044 00:15:11.044 real 0m28.902s 00:15:11.044 user 0m31.599s 00:15:11.044 sys 0m8.831s 00:15:11.044 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:11.044 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:11.044 ************************************ 00:15:11.044 END TEST nvmf_ns_masking 00:15:11.044 ************************************ 00:15:11.044 13:19:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:11.044 13:19:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:11.044 13:19:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:11.044 13:19:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:11.044 13:19:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:11.305 ************************************ 00:15:11.305 START TEST nvmf_nvme_cli 00:15:11.305 ************************************ 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:11.305 * Looking for test storage... 
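The ns_masking run that ends above keeps tracing the same visibility probe: ns_masking.sh@43-45 greps `nvme list-ns` for the NSID, then asks `nvme id-ns ... -o json | jq -r .nguid` whether the NGUID is all zeros. A minimal reconstruction of that helper, together with the traced `tr -d -` UUID-to-NGUID conversion, is sketched here; treat it as illustrative of the traced logic, not the verbatim script.

ns_is_visible() {
    # Reconstructed from the ns_masking.sh@43-45 trace: a namespace counts
    # as visible only if it shows up in `nvme list-ns` AND `nvme id-ns`
    # reports a non-zero NGUID. An all-zero NGUID means the target masked it.
    local nsid=$1
    nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

uuid2nguid() {
    # The trace feeds uppercased, dash-stripped UUIDs to nvmf_subsystem_add_ns -g,
    # e.g. 587ca1b1-f270-45a2-a927-f8d7093ce7dc -> 587CA1B1F27045A2A927F8D7093CE7DC.
    tr -d - <<< "${1^^}"
}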
00:15:11.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:11.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.305 --rc genhtml_branch_coverage=1 00:15:11.305 --rc genhtml_function_coverage=1 00:15:11.305 --rc genhtml_legend=1 00:15:11.305 --rc geninfo_all_blocks=1 00:15:11.305 --rc geninfo_unexecuted_blocks=1 00:15:11.305 00:15:11.305 ' 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:11.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.305 --rc genhtml_branch_coverage=1 00:15:11.305 --rc genhtml_function_coverage=1 00:15:11.305 --rc genhtml_legend=1 00:15:11.305 --rc geninfo_all_blocks=1 00:15:11.305 --rc geninfo_unexecuted_blocks=1 00:15:11.305 00:15:11.305 ' 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:11.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.305 --rc genhtml_branch_coverage=1 00:15:11.305 --rc genhtml_function_coverage=1 00:15:11.305 --rc genhtml_legend=1 00:15:11.305 --rc geninfo_all_blocks=1 00:15:11.305 --rc geninfo_unexecuted_blocks=1 00:15:11.305 00:15:11.305 ' 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:11.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.305 --rc genhtml_branch_coverage=1 00:15:11.305 --rc genhtml_function_coverage=1 00:15:11.305 --rc genhtml_legend=1 00:15:11.305 --rc geninfo_all_blocks=1 00:15:11.305 --rc geninfo_unexecuted_blocks=1 00:15:11.305 00:15:11.305 ' 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
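The common.sh trace just above (`lt 1.15 2`, `cmp_versions 1.15 '<' 2`, the ver1/ver2 arrays) is the lcov version gate that decides whether the legacy `--rc lcov_*` coverage flags apply. A hedged reconstruction of that comparison follows, simplified to split on '.' only (the traced helper also splits on '-' and ':'):

version_lt() {
    # Split both versions into numeric fields and compare field by field,
    # treating missing fields as 0, so "1.15" < "2" holds as traced above.
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < len; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal versions are not less-than
}

# lcov older than 2 -> keep the legacy option spellings, as the trace shows:
version_lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'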
00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.305 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:11.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:11.567 13:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.567 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:11.568 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:11.568 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:11.568 13:19:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:19.708 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:19.708 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:19.709 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.709 
13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:19.709 Found net devices under 0000:31:00.0: cvl_0_0 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:19.709 Found net devices under 0000:31:00.1: cvl_0_1 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:19.709 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:19.969 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:19.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.632 ms 00:15:19.970 00:15:19.970 --- 10.0.0.2 ping statistics --- 00:15:19.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.970 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:19.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:19.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:15:19.970 00:15:19.970 --- 10.0.0.1 ping statistics --- 00:15:19.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.970 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=877312 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 877312 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 877312 ']' 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:19.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:19.970 13:19:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:19.970 [2024-12-05 13:19:42.435468] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:15:19.970 [2024-12-05 13:19:42.435521] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.970 [2024-12-05 13:19:42.524430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.232 [2024-12-05 13:19:42.563829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.232 [2024-12-05 13:19:42.563869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.232 [2024-12-05 13:19:42.563877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.232 [2024-12-05 13:19:42.563884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.232 [2024-12-05 13:19:42.563889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.232 [2024-12-05 13:19:42.565473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.232 [2024-12-05 13:19:42.565587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.232 [2024-12-05 13:19:42.565746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.232 [2024-12-05 13:19:42.565747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:20.805 [2024-12-05 13:19:43.284915] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:20.805 Malloc0 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:20.805 Malloc1 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.805 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:21.067 [2024-12-05 13:19:43.382941] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420
00:15:21.067
00:15:21.067 Discovery Log Number of Records 2, Generation counter 2
00:15:21.067 =====Discovery Log Entry 0======
00:15:21.067 trtype: tcp
00:15:21.067 adrfam: ipv4
00:15:21.067 subtype: current discovery subsystem
00:15:21.067 treq: not required
00:15:21.067 portid: 0
00:15:21.067 trsvcid: 4420
00:15:21.067 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:15:21.067 traddr: 10.0.0.2
00:15:21.067 eflags: explicit discovery connections, duplicate discovery information
00:15:21.067 sectype: none
00:15:21.067 =====Discovery Log Entry 1======
00:15:21.067 trtype: tcp
00:15:21.067 adrfam: ipv4
00:15:21.067 subtype: nvme subsystem
00:15:21.067 treq: not required
00:15:21.067 portid: 0
00:15:21.067 trsvcid: 4420
00:15:21.067 subnqn: nqn.2016-06.io.spdk:cnode1
00:15:21.067 traddr: 10.0.0.2
00:15:21.067 eflags: none
00:15:21.067 sectype: none
00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:21.067 13:19:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:22.982 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:22.982 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:22.982 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:22.982 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:22.982 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:22.982 13:19:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:24.893 13:19:47
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:24.893 /dev/nvme0n2 ]] 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:24.893 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:25.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.153 13:19:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:25.153 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:25.153 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:25.153 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.153 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:25.153 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:25.154 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:15:25.154 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:25.154 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:25.154 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.154 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:25.414 rmmod nvme_tcp 00:15:25.414 rmmod nvme_fabrics 00:15:25.414 rmmod nvme_keyring 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 877312 ']' 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 877312 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 877312 ']' 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 877312 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 877312 
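Condensed, the host-side flow this test just exercised is the following sequence (a sketch reusing the NQNs, host identity, serial, and address from the trace above):

HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
# Read the discovery log (2 records above: the discovery subsystem and cnode1):
nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
# Connect to the I/O subsystem, then wait for both namespaces to show up:
nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect 2: /dev/nvme0n1, /dev/nvme0n2
# Tear the host side down again:
nvme disconnect -n nqn.2016-06.io.spdk:cnode1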
00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 877312' 00:15:25.414 killing process with pid 877312 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 877312 00:15:25.414 13:19:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 877312 00:15:25.674 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:25.674 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:25.674 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:25.674 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:25.674 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:25.674 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:25.674 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:15:25.674 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:25.674 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:25.674 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.674 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:25.674 13:19:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.585 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:27.585 00:15:27.585 real 0m16.441s 00:15:27.585 user 0m24.194s 00:15:27.585 sys 0m7.062s 00:15:27.585 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.585 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:27.585 ************************************ 00:15:27.585 END TEST nvmf_nvme_cli 00:15:27.585 ************************************ 00:15:27.585 13:19:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:27.585 13:19:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:27.585 13:19:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:27.585 13:19:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.585 13:19:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:27.847 ************************************ 00:15:27.847 START TEST nvmf_vfio_user 00:15:27.847 ************************************ 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:15:27.847 * Looking for test storage... 00:15:27.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.847 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:27.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.848 --rc genhtml_branch_coverage=1 00:15:27.848 --rc genhtml_function_coverage=1 00:15:27.848 --rc genhtml_legend=1 00:15:27.848 --rc geninfo_all_blocks=1 00:15:27.848 --rc geninfo_unexecuted_blocks=1 00:15:27.848 00:15:27.848 ' 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:27.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.848 --rc genhtml_branch_coverage=1 00:15:27.848 --rc genhtml_function_coverage=1 00:15:27.848 --rc genhtml_legend=1 00:15:27.848 --rc geninfo_all_blocks=1 00:15:27.848 --rc geninfo_unexecuted_blocks=1 00:15:27.848 00:15:27.848 ' 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:27.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.848 --rc genhtml_branch_coverage=1 00:15:27.848 --rc genhtml_function_coverage=1 00:15:27.848 --rc genhtml_legend=1 00:15:27.848 --rc geninfo_all_blocks=1 00:15:27.848 --rc geninfo_unexecuted_blocks=1 00:15:27.848 00:15:27.848 ' 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:27.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.848 --rc genhtml_branch_coverage=1 00:15:27.848 --rc genhtml_function_coverage=1 00:15:27.848 --rc genhtml_legend=1 00:15:27.848 --rc geninfo_all_blocks=1 00:15:27.848 --rc geninfo_unexecuted_blocks=1 00:15:27.848 00:15:27.848 ' 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:27.848 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:28.109 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:28.109 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:28.109 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:28.109 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=879122 00:15:28.109 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 879122' 00:15:28.109 Process pid: 879122 00:15:28.109 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:28.109 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 879122 00:15:28.109 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 879122 ']' 00:15:28.109 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:28.109 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.109 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.109 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.109 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.109 13:19:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:28.109 [2024-12-05 13:19:50.482056] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:15:28.109 [2024-12-05 13:19:50.482130] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.109 [2024-12-05 13:19:50.568750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:28.109 [2024-12-05 13:19:50.610757] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.109 [2024-12-05 13:19:50.610797] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:28.109 [2024-12-05 13:19:50.610805] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.109 [2024-12-05 13:19:50.610812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.109 [2024-12-05 13:19:50.610819] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.109 [2024-12-05 13:19:50.612588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.109 [2024-12-05 13:19:50.612710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.109 [2024-12-05 13:19:50.612838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.109 [2024-12-05 13:19:50.612838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.063 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.063 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:29.063 13:19:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:30.004 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:30.004 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:30.004 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:30.004 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:30.004 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:30.004 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:30.265 Malloc1 00:15:30.265 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:30.526 13:19:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:30.526 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:30.788 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:30.788 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:30.788 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:31.049 Malloc2 00:15:31.049 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
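The per-device target setup being driven here (just completed for cnode1, now repeating for cnode2) collapses to a short loop; a sketch with the full rpc.py path from above abbreviated to $rpc:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
for i in 1 2; do
    # each controller gets its own vfio-user socket directory...
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $rpc bdev_malloc_create 64 512 -b Malloc$i
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    # ...and the VFIOUSER listener takes that directory as its address (-s 0):
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done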
00:15:31.049 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:31.311 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:31.574 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:31.574 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:31.574 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:31.574 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:31.574 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:31.574 13:19:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:31.574 [2024-12-05 13:19:53.990840] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:15:31.574 [2024-12-05 13:19:53.990878] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879818 ] 00:15:31.574 [2024-12-05 13:19:54.043977] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:31.574 [2024-12-05 13:19:54.053142] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:31.574 [2024-12-05 13:19:54.053164] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f162e8a1000 00:15:31.574 [2024-12-05 13:19:54.054138] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:31.574 [2024-12-05 13:19:54.055138] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:31.574 [2024-12-05 13:19:54.056138] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:31.574 [2024-12-05 13:19:54.057144] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:31.574 [2024-12-05 13:19:54.058155] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:31.574 [2024-12-05 13:19:54.059161] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:31.574 [2024-12-05 13:19:54.060170] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:15:31.574 [2024-12-05 13:19:54.061166] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:31.574 [2024-12-05 13:19:54.062181] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:31.574 [2024-12-05 13:19:54.062190] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f162e896000 00:15:31.574 [2024-12-05 13:19:54.063516] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:31.574 [2024-12-05 13:19:54.084428] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:31.574 [2024-12-05 13:19:54.084464] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:31.574 [2024-12-05 13:19:54.087323] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:31.574 [2024-12-05 13:19:54.087368] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:31.574 [2024-12-05 13:19:54.087458] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:31.574 [2024-12-05 13:19:54.087476] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:31.574 [2024-12-05 13:19:54.087482] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:31.574 [2024-12-05 13:19:54.088321] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:31.574 [2024-12-05 13:19:54.088333] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:31.574 [2024-12-05 13:19:54.088340] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:31.574 [2024-12-05 13:19:54.089328] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:31.574 [2024-12-05 13:19:54.089338] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:31.574 [2024-12-05 13:19:54.089346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:31.574 [2024-12-05 13:19:54.090334] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:31.574 [2024-12-05 13:19:54.090343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:31.574 [2024-12-05 13:19:54.091336] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
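Every bar mapping, register read, and state transition in this block is produced by the single identify invocation from above; stripped of the workspace prefix, that command is:

# -L enables the debug log components whose output appears here
# (nvme, nvme_vfio, vfio_pci); -g requests single-file DPDK memory
# segments, matching --single-file-segments in the EAL parameter line above.
spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -g -L nvme -L nvme_vfio -L vfio_pci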
00:15:31.574 [2024-12-05 13:19:54.091345] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:31.574 [2024-12-05 13:19:54.091350] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:31.574 [2024-12-05 13:19:54.091357] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:31.574 [2024-12-05 13:19:54.091465] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:31.574 [2024-12-05 13:19:54.091470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:31.574 [2024-12-05 13:19:54.091476] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:31.574 [2024-12-05 13:19:54.092346] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:31.574 [2024-12-05 13:19:54.093350] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:31.574 [2024-12-05 13:19:54.094355] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:31.574 [2024-12-05 13:19:54.095359] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:31.574 [2024-12-05 13:19:54.095424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:31.574 [2024-12-05 13:19:54.096368] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:31.574 [2024-12-05 13:19:54.096376] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:31.574 [2024-12-05 13:19:54.096381] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:31.574 [2024-12-05 13:19:54.096403] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:31.575 [2024-12-05 13:19:54.096410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096430] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:31.575 [2024-12-05 13:19:54.096435] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:31.575 [2024-12-05 13:19:54.096439] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:31.575 [2024-12-05 13:19:54.096453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:15:31.575 [2024-12-05 13:19:54.096490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:31.575 [2024-12-05 13:19:54.096500] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:31.575 [2024-12-05 13:19:54.096505] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:31.575 [2024-12-05 13:19:54.096510] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:31.575 [2024-12-05 13:19:54.096517] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:31.575 [2024-12-05 13:19:54.096522] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:31.575 [2024-12-05 13:19:54.096527] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:31.575 [2024-12-05 13:19:54.096532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096540] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:31.575 [2024-12-05 13:19:54.096557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:31.575 [2024-12-05 13:19:54.096569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.575 [2024-12-05 13:19:54.096577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.575 [2024-12-05 13:19:54.096586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.575 [2024-12-05 13:19:54.096595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:31.575 [2024-12-05 13:19:54.096599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096609] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096618] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:31.575 [2024-12-05 13:19:54.096626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:31.575 [2024-12-05 13:19:54.096632] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:31.575 
[2024-12-05 13:19:54.096637] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096645] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096652] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096661] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:31.575 [2024-12-05 13:19:54.096668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:31.575 [2024-12-05 13:19:54.096731] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096739] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096746] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:31.575 [2024-12-05 13:19:54.096751] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:31.575 [2024-12-05 13:19:54.096756] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:31.575 [2024-12-05 13:19:54.096763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:31.575 [2024-12-05 13:19:54.096775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:31.575 [2024-12-05 13:19:54.096786] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:31.575 [2024-12-05 13:19:54.096799] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096814] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:31.575 [2024-12-05 13:19:54.096819] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:31.575 [2024-12-05 13:19:54.096822] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:31.575 [2024-12-05 13:19:54.096828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:31.575 [2024-12-05 13:19:54.096849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:31.575 [2024-12-05 13:19:54.096860] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096880] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096888] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:31.575 [2024-12-05 13:19:54.096892] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:31.575 [2024-12-05 13:19:54.096896] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:31.575 [2024-12-05 13:19:54.096902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:31.575 [2024-12-05 13:19:54.096911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:31.575 [2024-12-05 13:19:54.096922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096929] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096950] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096960] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:31.575 [2024-12-05 13:19:54.096966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:31.575 [2024-12-05 13:19:54.096972] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:31.575 [2024-12-05 13:19:54.096990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:31.575 [2024-12-05 13:19:54.097000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:31.575 [2024-12-05 13:19:54.097012] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:31.575 [2024-12-05 13:19:54.097020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:31.575 [2024-12-05 13:19:54.097031] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:31.575 [2024-12-05 13:19:54.097038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:31.575 [2024-12-05 13:19:54.097050] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:31.575 [2024-12-05 13:19:54.097057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:31.575 [2024-12-05 13:19:54.097070] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:31.575 [2024-12-05 13:19:54.097075] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:31.575 [2024-12-05 13:19:54.097078] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:31.575 [2024-12-05 13:19:54.097082] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:31.575 [2024-12-05 13:19:54.097085] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:31.575 [2024-12-05 13:19:54.097092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:31.575 [2024-12-05 13:19:54.097100] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:31.575 [2024-12-05 13:19:54.097104] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:31.575 [2024-12-05 13:19:54.097107] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:31.575 [2024-12-05 13:19:54.097113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:31.575 [2024-12-05 13:19:54.097121] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:31.575 [2024-12-05 13:19:54.097125] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:31.576 [2024-12-05 13:19:54.097129] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:31.576 [2024-12-05 13:19:54.097134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:31.576 [2024-12-05 13:19:54.097142] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:31.576 [2024-12-05 13:19:54.097147] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:31.576 [2024-12-05 13:19:54.097150] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:31.576 [2024-12-05 13:19:54.097156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:31.576 [2024-12-05 13:19:54.097163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:31.576 [2024-12-05 13:19:54.097179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:15:31.576 [2024-12-05 13:19:54.097189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:31.576 [2024-12-05 13:19:54.097197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:31.576 ===================================================== 00:15:31.576 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:31.576 ===================================================== 00:15:31.576 Controller Capabilities/Features 00:15:31.576 ================================ 00:15:31.576 Vendor ID: 4e58 00:15:31.576 Subsystem Vendor ID: 4e58 00:15:31.576 Serial Number: SPDK1 00:15:31.576 Model Number: SPDK bdev Controller 00:15:31.576 Firmware Version: 25.01 00:15:31.576 Recommended Arb Burst: 6 00:15:31.576 IEEE OUI Identifier: 8d 6b 50 00:15:31.576 Multi-path I/O 00:15:31.576 May have multiple subsystem ports: Yes 00:15:31.576 May have multiple controllers: Yes 00:15:31.576 Associated with SR-IOV VF: No 00:15:31.576 Max Data Transfer Size: 131072 00:15:31.576 Max Number of Namespaces: 32 00:15:31.576 Max Number of I/O Queues: 127 00:15:31.576 NVMe Specification Version (VS): 1.3 00:15:31.576 NVMe Specification Version (Identify): 1.3 00:15:31.576 Maximum Queue Entries: 256 00:15:31.576 Contiguous Queues Required: Yes 00:15:31.576 Arbitration Mechanisms Supported 00:15:31.576 Weighted Round Robin: Not Supported 00:15:31.576 Vendor Specific: Not Supported 00:15:31.576 Reset Timeout: 15000 ms 00:15:31.576 Doorbell Stride: 4 bytes 00:15:31.576 NVM Subsystem Reset: Not Supported 00:15:31.576 Command Sets Supported 00:15:31.576 NVM Command Set: Supported 00:15:31.576 Boot Partition: Not Supported 00:15:31.576 Memory Page Size Minimum: 4096 bytes 00:15:31.576 Memory Page Size Maximum: 4096 bytes 00:15:31.576 Persistent Memory Region: Not Supported 00:15:31.576 Optional Asynchronous Events Supported 00:15:31.576 Namespace Attribute Notices: Supported 00:15:31.576 Firmware Activation Notices: Not Supported 00:15:31.576 ANA Change Notices: Not Supported 00:15:31.576 PLE Aggregate Log Change Notices: Not Supported 00:15:31.576 LBA Status Info Alert Notices: Not Supported 00:15:31.576 EGE Aggregate Log Change Notices: Not Supported 00:15:31.576 Normal NVM Subsystem Shutdown event: Not Supported 00:15:31.576 Zone Descriptor Change Notices: Not Supported 00:15:31.576 Discovery Log Change Notices: Not Supported 00:15:31.576 Controller Attributes 00:15:31.576 128-bit Host Identifier: Supported 00:15:31.576 Non-Operational Permissive Mode: Not Supported 00:15:31.576 NVM Sets: Not Supported 00:15:31.576 Read Recovery Levels: Not Supported 00:15:31.576 Endurance Groups: Not Supported 00:15:31.576 Predictable Latency Mode: Not Supported 00:15:31.576 Traffic Based Keep ALive: Not Supported 00:15:31.576 Namespace Granularity: Not Supported 00:15:31.576 SQ Associations: Not Supported 00:15:31.576 UUID List: Not Supported 00:15:31.576 Multi-Domain Subsystem: Not Supported 00:15:31.576 Fixed Capacity Management: Not Supported 00:15:31.576 Variable Capacity Management: Not Supported 00:15:31.576 Delete Endurance Group: Not Supported 00:15:31.576 Delete NVM Set: Not Supported 00:15:31.576 Extended LBA Formats Supported: Not Supported 00:15:31.576 Flexible Data Placement Supported: Not Supported 00:15:31.576 00:15:31.576 Controller Memory Buffer Support 00:15:31.576 ================================ 00:15:31.576 
Supported: No 00:15:31.576 00:15:31.576 Persistent Memory Region Support 00:15:31.576 ================================ 00:15:31.576 Supported: No 00:15:31.576 00:15:31.576 Admin Command Set Attributes 00:15:31.576 ============================ 00:15:31.576 Security Send/Receive: Not Supported 00:15:31.576 Format NVM: Not Supported 00:15:31.576 Firmware Activate/Download: Not Supported 00:15:31.576 Namespace Management: Not Supported 00:15:31.576 Device Self-Test: Not Supported 00:15:31.576 Directives: Not Supported 00:15:31.576 NVMe-MI: Not Supported 00:15:31.576 Virtualization Management: Not Supported 00:15:31.576 Doorbell Buffer Config: Not Supported 00:15:31.576 Get LBA Status Capability: Not Supported 00:15:31.576 Command & Feature Lockdown Capability: Not Supported 00:15:31.576 Abort Command Limit: 4 00:15:31.576 Async Event Request Limit: 4 00:15:31.576 Number of Firmware Slots: N/A 00:15:31.576 Firmware Slot 1 Read-Only: N/A 00:15:31.576 Firmware Activation Without Reset: N/A 00:15:31.576 Multiple Update Detection Support: N/A 00:15:31.576 Firmware Update Granularity: No Information Provided 00:15:31.576 Per-Namespace SMART Log: No 00:15:31.576 Asymmetric Namespace Access Log Page: Not Supported 00:15:31.576 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:31.576 Command Effects Log Page: Supported 00:15:31.576 Get Log Page Extended Data: Supported 00:15:31.576 Telemetry Log Pages: Not Supported 00:15:31.576 Persistent Event Log Pages: Not Supported 00:15:31.576 Supported Log Pages Log Page: May Support 00:15:31.576 Commands Supported & Effects Log Page: Not Supported 00:15:31.576 Feature Identifiers & Effects Log Page:May Support 00:15:31.576 NVMe-MI Commands & Effects Log Page: May Support 00:15:31.576 Data Area 4 for Telemetry Log: Not Supported 00:15:31.576 Error Log Page Entries Supported: 128 00:15:31.576 Keep Alive: Supported 00:15:31.576 Keep Alive Granularity: 10000 ms 00:15:31.576 00:15:31.576 NVM Command Set Attributes 00:15:31.576 ========================== 00:15:31.576 Submission Queue Entry Size 00:15:31.576 Max: 64 00:15:31.576 Min: 64 00:15:31.576 Completion Queue Entry Size 00:15:31.576 Max: 16 00:15:31.576 Min: 16 00:15:31.576 Number of Namespaces: 32 00:15:31.576 Compare Command: Supported 00:15:31.576 Write Uncorrectable Command: Not Supported 00:15:31.576 Dataset Management Command: Supported 00:15:31.576 Write Zeroes Command: Supported 00:15:31.576 Set Features Save Field: Not Supported 00:15:31.576 Reservations: Not Supported 00:15:31.576 Timestamp: Not Supported 00:15:31.576 Copy: Supported 00:15:31.576 Volatile Write Cache: Present 00:15:31.576 Atomic Write Unit (Normal): 1 00:15:31.576 Atomic Write Unit (PFail): 1 00:15:31.576 Atomic Compare & Write Unit: 1 00:15:31.576 Fused Compare & Write: Supported 00:15:31.576 Scatter-Gather List 00:15:31.576 SGL Command Set: Supported (Dword aligned) 00:15:31.576 SGL Keyed: Not Supported 00:15:31.576 SGL Bit Bucket Descriptor: Not Supported 00:15:31.576 SGL Metadata Pointer: Not Supported 00:15:31.576 Oversized SGL: Not Supported 00:15:31.576 SGL Metadata Address: Not Supported 00:15:31.576 SGL Offset: Not Supported 00:15:31.576 Transport SGL Data Block: Not Supported 00:15:31.576 Replay Protected Memory Block: Not Supported 00:15:31.576 00:15:31.576 Firmware Slot Information 00:15:31.576 ========================= 00:15:31.576 Active slot: 1 00:15:31.576 Slot 1 Firmware Revision: 25.01 00:15:31.576 00:15:31.576 00:15:31.576 Commands Supported and Effects 00:15:31.576 ============================== 00:15:31.576 Admin 
Commands 00:15:31.576 -------------- 00:15:31.576 Get Log Page (02h): Supported 00:15:31.576 Identify (06h): Supported 00:15:31.576 Abort (08h): Supported 00:15:31.576 Set Features (09h): Supported 00:15:31.576 Get Features (0Ah): Supported 00:15:31.576 Asynchronous Event Request (0Ch): Supported 00:15:31.576 Keep Alive (18h): Supported 00:15:31.576 I/O Commands 00:15:31.576 ------------ 00:15:31.576 Flush (00h): Supported LBA-Change 00:15:31.576 Write (01h): Supported LBA-Change 00:15:31.576 Read (02h): Supported 00:15:31.576 Compare (05h): Supported 00:15:31.576 Write Zeroes (08h): Supported LBA-Change 00:15:31.576 Dataset Management (09h): Supported LBA-Change 00:15:31.576 Copy (19h): Supported LBA-Change 00:15:31.576 00:15:31.576 Error Log 00:15:31.576 ========= 00:15:31.576 00:15:31.576 Arbitration 00:15:31.576 =========== 00:15:31.576 Arbitration Burst: 1 00:15:31.576 00:15:31.576 Power Management 00:15:31.576 ================ 00:15:31.577 Number of Power States: 1 00:15:31.577 Current Power State: Power State #0 00:15:31.577 Power State #0: 00:15:31.577 Max Power: 0.00 W 00:15:31.577 Non-Operational State: Operational 00:15:31.577 Entry Latency: Not Reported 00:15:31.577 Exit Latency: Not Reported 00:15:31.577 Relative Read Throughput: 0 00:15:31.577 Relative Read Latency: 0 00:15:31.577 Relative Write Throughput: 0 00:15:31.577 Relative Write Latency: 0 00:15:31.577 Idle Power: Not Reported 00:15:31.577 Active Power: Not Reported 00:15:31.577 Non-Operational Permissive Mode: Not Supported 00:15:31.577 00:15:31.577 Health Information 00:15:31.577 ================== 00:15:31.577 Critical Warnings: 00:15:31.577 Available Spare Space: OK 00:15:31.577 Temperature: OK 00:15:31.577 Device Reliability: OK 00:15:31.577 Read Only: No 00:15:31.577 Volatile Memory Backup: OK 00:15:31.577 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:31.577 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:31.577 Available Spare: 0% 00:15:31.577 Available Spare Threshold: 0% 00:15:31.897 Life Percentage Used: 0% 00:15:31.897 Data Units Read: 0 00:15:31.897 Data Units Written: 0 00:15:31.897 Host Read Commands: 0 00:15:31.897 Host Write Commands: 0 00:15:31.897 Controller Busy Time: 0 minutes 00:15:31.897 Power Cycles: 0 00:15:31.897 Power On Hours: 0 hours 00:15:31.897 Unsafe Shutdowns: 0 00:15:31.897 Unrecoverable Media Errors: 0 00:15:31.897 Lifetime Error Log Entries: 0 00:15:31.897 Warning Temperature Time: 0 minutes 00:15:31.897 Critical Temperature Time: 0 minutes 00:15:31.897 00:15:31.897 Number of Queues 00:15:31.897 ================ 00:15:31.897 Number of I/O Submission Queues: 127 00:15:31.897 Number of I/O Completion Queues: 127 00:15:31.897 00:15:31.897 Active Namespaces 00:15:31.897 ================= 00:15:31.897 Namespace ID:1 00:15:31.897 Error Recovery Timeout: Unlimited 00:15:31.897 Command Set Identifier: NVM (00h) 00:15:31.897 Deallocate: Supported 00:15:31.897 Deallocated/Unwritten Error: Not Supported 00:15:31.897 Deallocated Read Value: Unknown 00:15:31.897 Deallocate in Write Zeroes: Not Supported 00:15:31.897 Deallocated Guard Field: 0xFFFF 00:15:31.897 Flush: Supported 00:15:31.897 Reservation: Supported 00:15:31.897 Namespace Sharing Capabilities: Multiple Controllers 00:15:31.897 Size (in LBAs): 131072 (0GiB) 00:15:31.897 Capacity (in LBAs): 131072 (0GiB) 00:15:31.897 Utilization (in LBAs): 131072 (0GiB) 00:15:31.897 NGUID: 857FE08C5095462EBBDAA04EED535CE9 00:15:31.897 UUID: 857fe08c-5095-462e-bbda-a04eed535ce9 00:15:31.897 Thin Provisioning: Not Supported 00:15:31.897 Per-NS Atomic Units: Yes 00:15:31.897 Atomic Boundary Size (Normal): 0 00:15:31.897 Atomic Boundary Size (PFail): 0 00:15:31.897 Atomic Boundary Offset: 0 00:15:31.897 Maximum Single Source Range Length: 65535 00:15:31.897 Maximum Copy Length: 65535 00:15:31.897 Maximum Source Range Count: 1 00:15:31.897 NGUID/EUI64 Never Reused: No 00:15:31.897 Namespace Write Protected: No 00:15:31.897 Number of LBA Formats: 1 00:15:31.897 Current LBA Format: LBA Format #00 00:15:31.897 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:31.897 00:15:31.897 [2024-12-05 13:19:54.097296] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:31.577 [2024-12-05 13:19:54.097305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:31.577 [2024-12-05 13:19:54.097333] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:31.577 [2024-12-05 13:19:54.097343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.577 [2024-12-05 13:19:54.097350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.577 [2024-12-05 13:19:54.097356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.577 [2024-12-05 13:19:54.097363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:31.577 [2024-12-05 13:19:54.098380] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:31.577 [2024-12-05 13:19:54.098391] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:31.577 [2024-12-05 13:19:54.099383] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:31.577 [2024-12-05 13:19:54.099423] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:31.577 [2024-12-05 13:19:54.099430] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:31.577 [2024-12-05 13:19:54.100389] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:31.577 [2024-12-05 13:19:54.100401] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:31.577 [2024-12-05 13:19:54.100457] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:31.577 [2024-12-05 13:19:54.103869] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:31.897 13:19:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:15:31.897 [2024-12-05 13:19:54.306604] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:37.237 Initializing NVMe Controllers 00:15:37.237 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:37.237 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:37.237 Initialization complete. Launching workers. 00:15:37.237 ======================================================== 00:15:37.237 Latency(us) 00:15:37.237 Device Information : IOPS MiB/s Average min max 00:15:37.237 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39999.60 156.25 3199.90 862.75 6811.84 00:15:37.237 ======================================================== 00:15:37.237 Total : 39999.60 156.25 3199.90 862.75 6811.84 00:15:37.237 00:15:37.237 [2024-12-05 13:19:59.326676] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:37.237 13:19:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:37.237 [2024-12-05 13:19:59.524612] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:42.523 Initializing NVMe Controllers 00:15:42.523 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:42.523 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:42.523 Initialization complete. Launching workers. 
00:15:42.523 ======================================================== 00:15:42.523 Latency(us) 00:15:42.523 Device Information : IOPS MiB/s Average min max 00:15:42.523 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16055.90 62.72 7977.70 6988.37 8974.54 00:15:42.523 ======================================================== 00:15:42.523 Total : 16055.90 62.72 7977.70 6988.37 8974.54 00:15:42.523 00:15:42.523 [2024-12-05 13:20:04.565188] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:42.523 13:20:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:42.523 [2024-12-05 13:20:04.775125] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:47.804 [2024-12-05 13:20:09.848083] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:47.804 Initializing NVMe Controllers 00:15:47.804 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:47.804 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:47.804 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:47.804 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:47.804 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:47.804 Initialization complete. Launching workers. 00:15:47.804 Starting thread on core 2 00:15:47.804 Starting thread on core 3 00:15:47.804 Starting thread on core 1 00:15:47.804 13:20:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:47.804 [2024-12-05 13:20:10.135681] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:51.102 [2024-12-05 13:20:13.185403] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:51.102 Initializing NVMe Controllers 00:15:51.102 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:51.102 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:51.102 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:51.102 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:51.102 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:51.102 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:51.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:51.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:51.102 Initialization complete. Launching workers. 
00:15:51.102 Starting thread on core 1 with urgent priority queue 00:15:51.102 Starting thread on core 2 with urgent priority queue 00:15:51.102 Starting thread on core 3 with urgent priority queue 00:15:51.102 Starting thread on core 0 with urgent priority queue 00:15:51.102 SPDK bdev Controller (SPDK1 ) core 0: 6392.67 IO/s 15.64 secs/100000 ios 00:15:51.102 SPDK bdev Controller (SPDK1 ) core 1: 5635.67 IO/s 17.74 secs/100000 ios 00:15:51.102 SPDK bdev Controller (SPDK1 ) core 2: 4568.33 IO/s 21.89 secs/100000 ios 00:15:51.102 SPDK bdev Controller (SPDK1 ) core 3: 5769.33 IO/s 17.33 secs/100000 ios 00:15:51.102 ======================================================== 00:15:51.102 00:15:51.102 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:51.102 [2024-12-05 13:20:13.480263] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:51.102 Initializing NVMe Controllers 00:15:51.102 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:51.102 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:51.102 Namespace ID: 1 size: 0GB 00:15:51.102 Initialization complete. 00:15:51.102 INFO: using host memory buffer for IO 00:15:51.102 Hello world! 00:15:51.102 [2024-12-05 13:20:13.514454] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:51.102 13:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:51.362 [2024-12-05 13:20:13.808258] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:52.302 Initializing NVMe Controllers 00:15:52.302 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:52.302 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:52.302 Initialization complete. Launching workers. 
00:15:52.302 submit (in ns) avg, min, max = 7907.6, 3906.7, 4000401.7 00:15:52.302 complete (in ns) avg, min, max = 17628.5, 2383.3, 4033535.0 00:15:52.302 00:15:52.302 Submit histogram 00:15:52.302 ================ 00:15:52.302 Range in us Cumulative Count 00:15:52.302 3.893 - 3.920: 0.2030% ( 38) 00:15:52.302 3.920 - 3.947: 2.5751% ( 444) 00:15:52.302 3.947 - 3.973: 9.7072% ( 1335) 00:15:52.302 3.973 - 4.000: 20.3173% ( 1986) 00:15:52.302 4.000 - 4.027: 33.2354% ( 2418) 00:15:52.302 4.027 - 4.053: 46.8426% ( 2547) 00:15:52.302 4.053 - 4.080: 63.1371% ( 3050) 00:15:52.302 4.080 - 4.107: 78.5928% ( 2893) 00:15:52.302 4.107 - 4.133: 88.9358% ( 1936) 00:15:52.302 4.133 - 4.160: 95.6512% ( 1257) 00:15:52.302 4.160 - 4.187: 98.2637% ( 489) 00:15:52.302 4.187 - 4.213: 99.1612% ( 168) 00:15:52.302 4.213 - 4.240: 99.4070% ( 46) 00:15:52.302 4.240 - 4.267: 99.4764% ( 13) 00:15:52.302 4.267 - 4.293: 99.4871% ( 2) 00:15:52.302 4.293 - 4.320: 99.4925% ( 1) 00:15:52.302 4.320 - 4.347: 99.5032% ( 2) 00:15:52.302 4.533 - 4.560: 99.5085% ( 1) 00:15:52.302 4.693 - 4.720: 99.5138% ( 1) 00:15:52.302 4.773 - 4.800: 99.5192% ( 1) 00:15:52.302 4.827 - 4.853: 99.5245% ( 1) 00:15:52.302 4.853 - 4.880: 99.5299% ( 1) 00:15:52.302 4.960 - 4.987: 99.5352% ( 1) 00:15:52.302 5.040 - 5.067: 99.5405% ( 1) 00:15:52.302 5.253 - 5.280: 99.5459% ( 1) 00:15:52.302 5.520 - 5.547: 99.5512% ( 1) 00:15:52.302 5.680 - 5.707: 99.5566% ( 1) 00:15:52.302 5.840 - 5.867: 99.5619% ( 1) 00:15:52.302 5.893 - 5.920: 99.5673% ( 1) 00:15:52.302 5.947 - 5.973: 99.5726% ( 1) 00:15:52.302 6.000 - 6.027: 99.5779% ( 1) 00:15:52.302 6.027 - 6.053: 99.5833% ( 1) 00:15:52.302 6.053 - 6.080: 99.5886% ( 1) 00:15:52.302 6.107 - 6.133: 99.6260% ( 7) 00:15:52.302 6.133 - 6.160: 99.6367% ( 2) 00:15:52.302 6.187 - 6.213: 99.6421% ( 1) 00:15:52.302 6.213 - 6.240: 99.6474% ( 1) 00:15:52.302 6.240 - 6.267: 99.6527% ( 1) 00:15:52.302 6.267 - 6.293: 99.6581% ( 1) 00:15:52.302 6.320 - 6.347: 99.6634% ( 1) 00:15:52.302 6.347 - 6.373: 99.6741% ( 2) 00:15:52.302 6.373 - 6.400: 99.6795% ( 1) 00:15:52.302 6.400 - 6.427: 99.6848% ( 1) 00:15:52.302 6.480 - 6.507: 99.6901% ( 1) 00:15:52.302 6.507 - 6.533: 99.7008% ( 2) 00:15:52.302 6.533 - 6.560: 99.7062% ( 1) 00:15:52.302 6.560 - 6.587: 99.7115% ( 1) 00:15:52.302 6.640 - 6.667: 99.7169% ( 1) 00:15:52.302 6.720 - 6.747: 99.7222% ( 1) 00:15:52.302 6.827 - 6.880: 99.7275% ( 1) 00:15:52.302 6.880 - 6.933: 99.7382% ( 2) 00:15:52.302 6.987 - 7.040: 99.7489% ( 2) 00:15:52.302 7.040 - 7.093: 99.7596% ( 2) 00:15:52.302 7.093 - 7.147: 99.7649% ( 1) 00:15:52.302 7.147 - 7.200: 99.7703% ( 1) 00:15:52.302 7.200 - 7.253: 99.7756% ( 1) 00:15:52.302 7.253 - 7.307: 99.7863% ( 2) 00:15:52.302 7.307 - 7.360: 99.7970% ( 2) 00:15:52.302 7.413 - 7.467: 99.8184% ( 4) 00:15:52.302 7.573 - 7.627: 99.8344% ( 3) 00:15:52.302 7.680 - 7.733: 99.8504% ( 3) 00:15:52.302 7.733 - 7.787: 99.8558% ( 1) 00:15:52.302 7.787 - 7.840: 99.8718% ( 3) 00:15:52.302 7.840 - 7.893: 99.8771% ( 1) 00:15:52.302 8.213 - 8.267: 99.8825% ( 1) 00:15:52.302 8.640 - 8.693: 99.8878% ( 1) 00:15:52.302 8.693 - 8.747: 99.8932% ( 1) 00:15:52.302 9.067 - 9.120: 99.9038% ( 2) 00:15:52.302 3986.773 - 4014.080: 100.0000% ( 18) 00:15:52.302 00:15:52.302 Complete histogram 00:15:52.302 ================== 00:15:52.302 Range in us Cumulative Count 00:15:52.302 2.373 - 2.387: 0.0053% ( 1) 00:15:52.302 2.387 - 2.400: 1.7256% ( 322) 00:15:52.302 2.400 - 2.413: 1.9019% ( 33) 00:15:52.302 2.413 - 2.427: 2.2492% ( 65) 00:15:52.302 2.427 - 2.440: 2.5323% ( 53) 00:15:52.302 2.440 - 2.453: 
52.9490% ( 9437) 00:15:52.302 2.453 - 2.467: 63.2920% ( 1936) 00:15:52.302 2.467 - 2.480: 76.0338% ( 2385) 00:15:52.302 [2024-12-05 13:20:14.822586] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:52.562 2.480 - 2.493: 80.3131% ( 801) 00:15:52.562 2.493 - 2.507: 81.5472% ( 231) 00:15:52.562 2.507 - 2.520: 85.5540% ( 750) 00:15:52.562 2.520 - 2.533: 92.2160% ( 1247) 00:15:52.562 2.533 - 2.547: 96.1748% ( 741) 00:15:52.562 2.547 - 2.560: 98.0660% ( 354) 00:15:52.562 2.560 - 2.573: 98.9689% ( 169) 00:15:52.562 2.573 - 2.587: 99.3482% ( 71) 00:15:52.562 2.587 - 2.600: 99.4497% ( 19) 00:15:52.562 2.600 - 2.613: 99.4551% ( 1) 00:15:52.562 2.613 - 2.627: 99.4604% ( 1) 00:15:52.562 2.627 - 2.640: 99.4658% ( 1) 00:15:52.562 2.680 - 2.693: 99.4711% ( 1) 00:15:52.562 4.293 - 4.320: 99.4764% ( 1) 00:15:52.562 4.347 - 4.373: 99.4818% ( 1) 00:15:52.562 4.587 - 4.613: 99.4871% ( 1) 00:15:52.562 4.640 - 4.667: 99.4925% ( 1) 00:15:52.562 4.667 - 4.693: 99.4978% ( 1) 00:15:52.563 4.773 - 4.800: 99.5032% ( 1) 00:15:52.563 4.827 - 4.853: 99.5085% ( 1) 00:15:52.563 4.880 - 4.907: 99.5138% ( 1) 00:15:52.563 4.933 - 4.960: 99.5192% ( 1) 00:15:52.563 4.960 - 4.987: 99.5245% ( 1) 00:15:52.563 4.987 - 5.013: 99.5299% ( 1) 00:15:52.563 5.147 - 5.173: 99.5352% ( 1) 00:15:52.563 5.227 - 5.253: 99.5405% ( 1) 00:15:52.563 5.280 - 5.307: 99.5459% ( 1) 00:15:52.563 5.360 - 5.387: 99.5512% ( 1) 00:15:52.563 5.547 - 5.573: 99.5566% ( 1) 00:15:52.563 5.600 - 5.627: 99.5619% ( 1) 00:15:52.563 5.893 - 5.920: 99.5673% ( 1) 00:15:52.563 5.920 - 5.947: 99.5726% ( 1) 00:15:52.563 5.973 - 6.000: 99.5779% ( 1) 00:15:52.563 6.107 - 6.133: 99.5833% ( 1) 00:15:52.563 6.160 - 6.187: 99.5886% ( 1) 00:15:52.563 7.040 - 7.093: 99.5940% ( 1) 00:15:52.563 10.453 - 10.507: 99.5993% ( 1) 00:15:52.563 12.000 - 12.053: 99.6047% ( 1) 00:15:52.563 12.213 - 12.267: 99.6100% ( 1) 00:15:52.563 12.320 - 12.373: 99.6153% ( 1) 00:15:52.563 12.533 - 12.587: 99.6207% ( 1) 00:15:52.563 3986.773 - 4014.080: 99.9947% ( 70) 00:15:52.563 4014.080 - 4041.387: 100.0000% ( 1) 00:15:52.563 00:15:52.563 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:52.563 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:52.563 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:52.563 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:52.563 13:20:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:52.563 [ 00:15:52.563 { 00:15:52.563 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:52.563 "subtype": "Discovery", 00:15:52.563 "listen_addresses": [], 00:15:52.563 "allow_any_host": true, 00:15:52.563 "hosts": [] 00:15:52.563 }, 00:15:52.563 { 00:15:52.563 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:52.563 "subtype": "NVMe", 00:15:52.563 "listen_addresses": [ 00:15:52.563 { 00:15:52.563 "trtype": "VFIOUSER", 00:15:52.563 "adrfam": "IPv4", 00:15:52.563 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:52.563 "trsvcid": "0" 00:15:52.563 } 00:15:52.563 ], 00:15:52.563 "allow_any_host": true, 00:15:52.563 "hosts": [],
"serial_number": "SPDK1", 00:15:52.563 "model_number": "SPDK bdev Controller", 00:15:52.563 "max_namespaces": 32, 00:15:52.563 "min_cntlid": 1, 00:15:52.563 "max_cntlid": 65519, 00:15:52.563 "namespaces": [ 00:15:52.563 { 00:15:52.563 "nsid": 1, 00:15:52.563 "bdev_name": "Malloc1", 00:15:52.563 "name": "Malloc1", 00:15:52.563 "nguid": "857FE08C5095462EBBDAA04EED535CE9", 00:15:52.563 "uuid": "857fe08c-5095-462e-bbda-a04eed535ce9" 00:15:52.563 } 00:15:52.563 ] 00:15:52.563 }, 00:15:52.563 { 00:15:52.563 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:52.563 "subtype": "NVMe", 00:15:52.563 "listen_addresses": [ 00:15:52.563 { 00:15:52.563 "trtype": "VFIOUSER", 00:15:52.563 "adrfam": "IPv4", 00:15:52.563 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:52.563 "trsvcid": "0" 00:15:52.563 } 00:15:52.563 ], 00:15:52.563 "allow_any_host": true, 00:15:52.563 "hosts": [], 00:15:52.563 "serial_number": "SPDK2", 00:15:52.563 "model_number": "SPDK bdev Controller", 00:15:52.563 "max_namespaces": 32, 00:15:52.563 "min_cntlid": 1, 00:15:52.563 "max_cntlid": 65519, 00:15:52.563 "namespaces": [ 00:15:52.563 { 00:15:52.563 "nsid": 1, 00:15:52.563 "bdev_name": "Malloc2", 00:15:52.563 "name": "Malloc2", 00:15:52.563 "nguid": "98C2BEF69DEB4EAEBE1A4C5E0C38ED79", 00:15:52.563 "uuid": "98c2bef6-9deb-4eae-be1a-4c5e0c38ed79" 00:15:52.563 } 00:15:52.563 ] 00:15:52.563 } 00:15:52.563 ] 00:15:52.563 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:52.563 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:52.563 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=883849 00:15:52.563 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:52.563 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:52.563 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:52.563 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:52.563 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:52.563 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:52.563 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:52.823 Malloc3 00:15:52.823 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:52.823 [2024-12-05 13:20:15.255607] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:53.083 [2024-12-05 13:20:15.411658] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:53.083 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:53.083 Asynchronous Event Request test 00:15:53.083 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:53.083 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:53.083 Registering asynchronous event callbacks... 00:15:53.083 Starting namespace attribute notice tests for all controllers... 00:15:53.083 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:53.083 aer_cb - Changed Namespace 00:15:53.083 Cleaning up... 00:15:53.083 [ 00:15:53.083 { 00:15:53.083 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:53.083 "subtype": "Discovery", 00:15:53.083 "listen_addresses": [], 00:15:53.083 "allow_any_host": true, 00:15:53.083 "hosts": [] 00:15:53.083 }, 00:15:53.083 { 00:15:53.083 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:53.083 "subtype": "NVMe", 00:15:53.083 "listen_addresses": [ 00:15:53.083 { 00:15:53.083 "trtype": "VFIOUSER", 00:15:53.083 "adrfam": "IPv4", 00:15:53.083 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:53.083 "trsvcid": "0" 00:15:53.083 } 00:15:53.084 ], 00:15:53.084 "allow_any_host": true, 00:15:53.084 "hosts": [], 00:15:53.084 "serial_number": "SPDK1", 00:15:53.084 "model_number": "SPDK bdev Controller", 00:15:53.084 "max_namespaces": 32, 00:15:53.084 "min_cntlid": 1, 00:15:53.084 "max_cntlid": 65519, 00:15:53.084 "namespaces": [ 00:15:53.084 { 00:15:53.084 "nsid": 1, 00:15:53.084 "bdev_name": "Malloc1", 00:15:53.084 "name": "Malloc1", 00:15:53.084 "nguid": "857FE08C5095462EBBDAA04EED535CE9", 00:15:53.084 "uuid": "857fe08c-5095-462e-bbda-a04eed535ce9" 00:15:53.084 }, 00:15:53.084 { 00:15:53.084 "nsid": 2, 00:15:53.084 "bdev_name": "Malloc3", 00:15:53.084 "name": "Malloc3", 00:15:53.084 "nguid": "420D4242FE584D46BDFA0E46A5B89BB4", 00:15:53.084 "uuid": "420d4242-fe58-4d46-bdfa-0e46a5b89bb4" 00:15:53.084 } 00:15:53.084 ] 00:15:53.084 }, 00:15:53.084 { 00:15:53.084 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:53.084 "subtype": "NVMe", 00:15:53.084 "listen_addresses": [ 00:15:53.084 { 00:15:53.084 "trtype": "VFIOUSER", 00:15:53.084 "adrfam": "IPv4", 00:15:53.084 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:53.084 "trsvcid": "0" 00:15:53.084 } 00:15:53.084 ], 00:15:53.084 "allow_any_host": true, 00:15:53.084 "hosts": [], 00:15:53.084 "serial_number": "SPDK2", 00:15:53.084 "model_number": "SPDK bdev 
Controller", 00:15:53.084 "max_namespaces": 32, 00:15:53.084 "min_cntlid": 1, 00:15:53.084 "max_cntlid": 65519, 00:15:53.084 "namespaces": [ 00:15:53.084 { 00:15:53.084 "nsid": 1, 00:15:53.084 "bdev_name": "Malloc2", 00:15:53.084 "name": "Malloc2", 00:15:53.084 "nguid": "98C2BEF69DEB4EAEBE1A4C5E0C38ED79", 00:15:53.084 "uuid": "98c2bef6-9deb-4eae-be1a-4c5e0c38ed79" 00:15:53.084 } 00:15:53.084 ] 00:15:53.084 } 00:15:53.084 ] 00:15:53.084 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 883849 00:15:53.084 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:53.084 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:53.084 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:53.084 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:53.346 [2024-12-05 13:20:15.653064] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:15:53.346 [2024-12-05 13:20:15.653147] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid883975 ] 00:15:53.346 [2024-12-05 13:20:15.715079] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:53.346 [2024-12-05 13:20:15.721044] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:53.346 [2024-12-05 13:20:15.721070] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff8fcfb0000 00:15:53.346 [2024-12-05 13:20:15.722044] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:53.346 [2024-12-05 13:20:15.723054] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:53.346 [2024-12-05 13:20:15.724054] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:53.346 [2024-12-05 13:20:15.725056] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:53.346 [2024-12-05 13:20:15.726062] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:53.346 [2024-12-05 13:20:15.727071] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:53.346 [2024-12-05 13:20:15.728076] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:53.346 [2024-12-05 13:20:15.729086] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:53.346 [2024-12-05 13:20:15.730096] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:53.346 [2024-12-05 13:20:15.730108] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff8fcfa5000 00:15:53.346 [2024-12-05 13:20:15.731489] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:53.346 [2024-12-05 13:20:15.751404] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:53.346 [2024-12-05 13:20:15.751432] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:53.346 [2024-12-05 13:20:15.753478] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:53.346 [2024-12-05 13:20:15.753530] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:53.346 [2024-12-05 13:20:15.753617] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:53.346 [2024-12-05 13:20:15.753631] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:53.346 [2024-12-05 13:20:15.753637] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:53.346 [2024-12-05 13:20:15.754484] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:53.346 [2024-12-05 13:20:15.754497] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:53.346 [2024-12-05 13:20:15.754504] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:53.346 [2024-12-05 13:20:15.755485] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:53.346 [2024-12-05 13:20:15.755494] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:53.346 [2024-12-05 13:20:15.755502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:53.346 [2024-12-05 13:20:15.756497] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:53.346 [2024-12-05 13:20:15.756506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:53.346 [2024-12-05 13:20:15.757499] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:53.346 [2024-12-05 13:20:15.757508] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:15:53.346 [2024-12-05 13:20:15.757513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:53.346 [2024-12-05 13:20:15.757520] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:53.346 [2024-12-05 13:20:15.757628] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:53.346 [2024-12-05 13:20:15.757634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:53.346 [2024-12-05 13:20:15.757639] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:53.346 [2024-12-05 13:20:15.758505] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:53.346 [2024-12-05 13:20:15.761871] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:53.346 [2024-12-05 13:20:15.762536] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:53.346 [2024-12-05 13:20:15.763536] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:53.346 [2024-12-05 13:20:15.763577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:53.346 [2024-12-05 13:20:15.764545] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:53.346 [2024-12-05 13:20:15.764557] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:53.346 [2024-12-05 13:20:15.764562] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:53.346 [2024-12-05 13:20:15.764584] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:53.346 [2024-12-05 13:20:15.764591] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:53.346 [2024-12-05 13:20:15.764607] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:53.347 [2024-12-05 13:20:15.764613] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:53.347 [2024-12-05 13:20:15.764616] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:53.347 [2024-12-05 13:20:15.764629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:53.347 [2024-12-05 13:20:15.772870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:53.347 
[2024-12-05 13:20:15.772883] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:53.347 [2024-12-05 13:20:15.772888] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:53.347 [2024-12-05 13:20:15.772893] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:53.347 [2024-12-05 13:20:15.772897] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:53.347 [2024-12-05 13:20:15.772902] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:53.347 [2024-12-05 13:20:15.772907] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:53.347 [2024-12-05 13:20:15.772912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.772920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.772931] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:53.347 [2024-12-05 13:20:15.780868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:53.347 [2024-12-05 13:20:15.780882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.347 [2024-12-05 13:20:15.780891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.347 [2024-12-05 13:20:15.780900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.347 [2024-12-05 13:20:15.780908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:53.347 [2024-12-05 13:20:15.780913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.780923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.780934] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:53.347 [2024-12-05 13:20:15.788868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:53.347 [2024-12-05 13:20:15.788876] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:53.347 [2024-12-05 13:20:15.788882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:53.347 [2024-12-05 13:20:15.788891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.788897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.788906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:53.347 [2024-12-05 13:20:15.796870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:53.347 [2024-12-05 13:20:15.796939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.796948] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.796956] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:53.347 [2024-12-05 13:20:15.796961] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:53.347 [2024-12-05 13:20:15.796965] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:53.347 [2024-12-05 13:20:15.796971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:53.347 [2024-12-05 13:20:15.804870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:53.347 [2024-12-05 13:20:15.804890] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:53.347 [2024-12-05 13:20:15.804898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.804906] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.804913] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:53.347 [2024-12-05 13:20:15.804918] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:53.347 [2024-12-05 13:20:15.804921] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:53.347 [2024-12-05 13:20:15.804928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:53.347 [2024-12-05 13:20:15.812869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:53.347 [2024-12-05 13:20:15.812882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.812890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.812900] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:53.347 [2024-12-05 13:20:15.812905] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:53.347 [2024-12-05 13:20:15.812908] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:53.347 [2024-12-05 13:20:15.812914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:53.347 [2024-12-05 13:20:15.820869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:53.347 [2024-12-05 13:20:15.820882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.820889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.820896] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.820903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.820908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.820913] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.820918] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:53.347 [2024-12-05 13:20:15.820923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:53.347 [2024-12-05 13:20:15.820928] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:53.347 [2024-12-05 13:20:15.820945] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:53.347 [2024-12-05 13:20:15.828871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:53.347 [2024-12-05 13:20:15.828885] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:53.347 [2024-12-05 13:20:15.836870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:53.347 [2024-12-05 13:20:15.836884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:53.347 [2024-12-05 13:20:15.844867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:15:53.347 [2024-12-05 13:20:15.844881] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:53.347 [2024-12-05 13:20:15.852870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:53.347 [2024-12-05 13:20:15.852886] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:53.347 [2024-12-05 13:20:15.852891] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:53.347 [2024-12-05 13:20:15.852895] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:53.347 [2024-12-05 13:20:15.852899] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:53.347 [2024-12-05 13:20:15.852902] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:53.347 [2024-12-05 13:20:15.852910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:53.347 [2024-12-05 13:20:15.852918] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:53.347 [2024-12-05 13:20:15.852923] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:53.347 [2024-12-05 13:20:15.852926] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:53.347 [2024-12-05 13:20:15.852932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:53.347 [2024-12-05 13:20:15.852940] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:53.347 [2024-12-05 13:20:15.852944] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:53.347 [2024-12-05 13:20:15.852947] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:53.347 [2024-12-05 13:20:15.852953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:53.347 [2024-12-05 13:20:15.852961] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:53.348 [2024-12-05 13:20:15.852966] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:53.348 [2024-12-05 13:20:15.852969] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:53.348 [2024-12-05 13:20:15.852975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:53.348 [2024-12-05 13:20:15.860869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:53.348 [2024-12-05 13:20:15.860885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:53.348 [2024-12-05 13:20:15.860896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:53.348 
[2024-12-05 13:20:15.860903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:53.348 ===================================================== 00:15:53.348 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:53.348 ===================================================== 00:15:53.348 Controller Capabilities/Features 00:15:53.348 ================================ 00:15:53.348 Vendor ID: 4e58 00:15:53.348 Subsystem Vendor ID: 4e58 00:15:53.348 Serial Number: SPDK2 00:15:53.348 Model Number: SPDK bdev Controller 00:15:53.348 Firmware Version: 25.01 00:15:53.348 Recommended Arb Burst: 6 00:15:53.348 IEEE OUI Identifier: 8d 6b 50 00:15:53.348 Multi-path I/O 00:15:53.348 May have multiple subsystem ports: Yes 00:15:53.348 May have multiple controllers: Yes 00:15:53.348 Associated with SR-IOV VF: No 00:15:53.348 Max Data Transfer Size: 131072 00:15:53.348 Max Number of Namespaces: 32 00:15:53.348 Max Number of I/O Queues: 127 00:15:53.348 NVMe Specification Version (VS): 1.3 00:15:53.348 NVMe Specification Version (Identify): 1.3 00:15:53.348 Maximum Queue Entries: 256 00:15:53.348 Contiguous Queues Required: Yes 00:15:53.348 Arbitration Mechanisms Supported 00:15:53.348 Weighted Round Robin: Not Supported 00:15:53.348 Vendor Specific: Not Supported 00:15:53.348 Reset Timeout: 15000 ms 00:15:53.348 Doorbell Stride: 4 bytes 00:15:53.348 NVM Subsystem Reset: Not Supported 00:15:53.348 Command Sets Supported 00:15:53.348 NVM Command Set: Supported 00:15:53.348 Boot Partition: Not Supported 00:15:53.348 Memory Page Size Minimum: 4096 bytes 00:15:53.348 Memory Page Size Maximum: 4096 bytes 00:15:53.348 Persistent Memory Region: Not Supported 00:15:53.348 Optional Asynchronous Events Supported 00:15:53.348 Namespace Attribute Notices: Supported 00:15:53.348 Firmware Activation Notices: Not Supported 00:15:53.348 ANA Change Notices: Not Supported 00:15:53.348 PLE Aggregate Log Change Notices: Not Supported 00:15:53.348 LBA Status Info Alert Notices: Not Supported 00:15:53.348 EGE Aggregate Log Change Notices: Not Supported 00:15:53.348 Normal NVM Subsystem Shutdown event: Not Supported 00:15:53.348 Zone Descriptor Change Notices: Not Supported 00:15:53.348 Discovery Log Change Notices: Not Supported 00:15:53.348 Controller Attributes 00:15:53.348 128-bit Host Identifier: Supported 00:15:53.348 Non-Operational Permissive Mode: Not Supported 00:15:53.348 NVM Sets: Not Supported 00:15:53.348 Read Recovery Levels: Not Supported 00:15:53.348 Endurance Groups: Not Supported 00:15:53.348 Predictable Latency Mode: Not Supported 00:15:53.348 Traffic Based Keep ALive: Not Supported 00:15:53.348 Namespace Granularity: Not Supported 00:15:53.348 SQ Associations: Not Supported 00:15:53.348 UUID List: Not Supported 00:15:53.348 Multi-Domain Subsystem: Not Supported 00:15:53.348 Fixed Capacity Management: Not Supported 00:15:53.348 Variable Capacity Management: Not Supported 00:15:53.348 Delete Endurance Group: Not Supported 00:15:53.348 Delete NVM Set: Not Supported 00:15:53.348 Extended LBA Formats Supported: Not Supported 00:15:53.348 Flexible Data Placement Supported: Not Supported 00:15:53.348 00:15:53.348 Controller Memory Buffer Support 00:15:53.348 ================================ 00:15:53.348 Supported: No 00:15:53.348 00:15:53.348 Persistent Memory Region Support 00:15:53.348 ================================ 00:15:53.348 Supported: No 00:15:53.348 00:15:53.348 Admin Command Set Attributes 
00:15:53.348 ============================ 00:15:53.348 Security Send/Receive: Not Supported 00:15:53.348 Format NVM: Not Supported 00:15:53.348 Firmware Activate/Download: Not Supported 00:15:53.348 Namespace Management: Not Supported 00:15:53.348 Device Self-Test: Not Supported 00:15:53.348 Directives: Not Supported 00:15:53.348 NVMe-MI: Not Supported 00:15:53.348 Virtualization Management: Not Supported 00:15:53.348 Doorbell Buffer Config: Not Supported 00:15:53.348 Get LBA Status Capability: Not Supported 00:15:53.348 Command & Feature Lockdown Capability: Not Supported 00:15:53.348 Abort Command Limit: 4 00:15:53.348 Async Event Request Limit: 4 00:15:53.348 Number of Firmware Slots: N/A 00:15:53.348 Firmware Slot 1 Read-Only: N/A 00:15:53.348 Firmware Activation Without Reset: N/A 00:15:53.348 Multiple Update Detection Support: N/A 00:15:53.348 Firmware Update Granularity: No Information Provided 00:15:53.348 Per-Namespace SMART Log: No 00:15:53.348 Asymmetric Namespace Access Log Page: Not Supported 00:15:53.348 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:53.348 Command Effects Log Page: Supported 00:15:53.348 Get Log Page Extended Data: Supported 00:15:53.348 Telemetry Log Pages: Not Supported 00:15:53.348 Persistent Event Log Pages: Not Supported 00:15:53.348 Supported Log Pages Log Page: May Support 00:15:53.348 Commands Supported & Effects Log Page: Not Supported 00:15:53.348 Feature Identifiers & Effects Log Page:May Support 00:15:53.348 NVMe-MI Commands & Effects Log Page: May Support 00:15:53.348 Data Area 4 for Telemetry Log: Not Supported 00:15:53.348 Error Log Page Entries Supported: 128 00:15:53.348 Keep Alive: Supported 00:15:53.348 Keep Alive Granularity: 10000 ms 00:15:53.348 00:15:53.348 NVM Command Set Attributes 00:15:53.348 ========================== 00:15:53.348 Submission Queue Entry Size 00:15:53.348 Max: 64 00:15:53.348 Min: 64 00:15:53.348 Completion Queue Entry Size 00:15:53.348 Max: 16 00:15:53.348 Min: 16 00:15:53.348 Number of Namespaces: 32 00:15:53.348 Compare Command: Supported 00:15:53.348 Write Uncorrectable Command: Not Supported 00:15:53.348 Dataset Management Command: Supported 00:15:53.348 Write Zeroes Command: Supported 00:15:53.348 Set Features Save Field: Not Supported 00:15:53.348 Reservations: Not Supported 00:15:53.348 Timestamp: Not Supported 00:15:53.348 Copy: Supported 00:15:53.348 Volatile Write Cache: Present 00:15:53.348 Atomic Write Unit (Normal): 1 00:15:53.348 Atomic Write Unit (PFail): 1 00:15:53.348 Atomic Compare & Write Unit: 1 00:15:53.348 Fused Compare & Write: Supported 00:15:53.348 Scatter-Gather List 00:15:53.348 SGL Command Set: Supported (Dword aligned) 00:15:53.348 SGL Keyed: Not Supported 00:15:53.348 SGL Bit Bucket Descriptor: Not Supported 00:15:53.348 SGL Metadata Pointer: Not Supported 00:15:53.348 Oversized SGL: Not Supported 00:15:53.348 SGL Metadata Address: Not Supported 00:15:53.348 SGL Offset: Not Supported 00:15:53.348 Transport SGL Data Block: Not Supported 00:15:53.348 Replay Protected Memory Block: Not Supported 00:15:53.348 00:15:53.348 Firmware Slot Information 00:15:53.348 ========================= 00:15:53.348 Active slot: 1 00:15:53.348 Slot 1 Firmware Revision: 25.01 00:15:53.348 00:15:53.348 00:15:53.348 Commands Supported and Effects 00:15:53.348 ============================== 00:15:53.348 Admin Commands 00:15:53.348 -------------- 00:15:53.348 Get Log Page (02h): Supported 00:15:53.348 Identify (06h): Supported 00:15:53.348 Abort (08h): Supported 00:15:53.348 Set Features (09h): Supported 
00:15:53.348 Get Features (0Ah): Supported 00:15:53.348 Asynchronous Event Request (0Ch): Supported 00:15:53.348 Keep Alive (18h): Supported 00:15:53.348 I/O Commands 00:15:53.348 ------------ 00:15:53.348 Flush (00h): Supported LBA-Change 00:15:53.348 Write (01h): Supported LBA-Change 00:15:53.348 Read (02h): Supported 00:15:53.348 Compare (05h): Supported 00:15:53.348 Write Zeroes (08h): Supported LBA-Change 00:15:53.348 Dataset Management (09h): Supported LBA-Change 00:15:53.348 Copy (19h): Supported LBA-Change 00:15:53.348 00:15:53.348 Error Log 00:15:53.348 ========= 00:15:53.348 00:15:53.348 Arbitration 00:15:53.348 =========== 00:15:53.348 Arbitration Burst: 1 00:15:53.348 00:15:53.348 Power Management 00:15:53.348 ================ 00:15:53.348 Number of Power States: 1 00:15:53.348 Current Power State: Power State #0 00:15:53.348 Power State #0: 00:15:53.349 Max Power: 0.00 W 00:15:53.349 Non-Operational State: Operational 00:15:53.349 Entry Latency: Not Reported 00:15:53.349 Exit Latency: Not Reported 00:15:53.349 Relative Read Throughput: 0 00:15:53.349 Relative Read Latency: 0 00:15:53.349 Relative Write Throughput: 0 00:15:53.349 Relative Write Latency: 0 00:15:53.349 Idle Power: Not Reported 00:15:53.349 Active Power: Not Reported 00:15:53.349 Non-Operational Permissive Mode: Not Supported 00:15:53.349 00:15:53.349 Health Information 00:15:53.349 ================== 00:15:53.349 Critical Warnings: 00:15:53.349 Available Spare Space: OK 00:15:53.349 Temperature: OK 00:15:53.349 Device Reliability: OK 00:15:53.349 Read Only: No 00:15:53.349 Volatile Memory Backup: OK 00:15:53.349 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:53.349 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:53.349 Available Spare: 0% 00:15:53.349 [2024-12-05 13:20:15.861004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:53.349 [2024-12-05 13:20:15.868875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:53.349 [2024-12-05 13:20:15.868909] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:53.349 [2024-12-05 13:20:15.868919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.349 [2024-12-05 13:20:15.868926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.349 [2024-12-05 13:20:15.868932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.349 [2024-12-05 13:20:15.868939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:53.349 [2024-12-05 13:20:15.868991] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:53.349 [2024-12-05 13:20:15.869003] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:53.349 [2024-12-05 13:20:15.869998] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:53.349 [2024-12-05 13:20:15.870051] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*:
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:53.349 [2024-12-05 13:20:15.870058] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:53.349 [2024-12-05 13:20:15.871000] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:53.349 [2024-12-05 13:20:15.871013] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:53.349 [2024-12-05 13:20:15.871063] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:53.349 [2024-12-05 13:20:15.872439] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:53.608 Available Spare Threshold: 0% 00:15:53.608 Life Percentage Used: 0% 00:15:53.608 Data Units Read: 0 00:15:53.608 Data Units Written: 0 00:15:53.608 Host Read Commands: 0 00:15:53.608 Host Write Commands: 0 00:15:53.608 Controller Busy Time: 0 minutes 00:15:53.608 Power Cycles: 0 00:15:53.608 Power On Hours: 0 hours 00:15:53.608 Unsafe Shutdowns: 0 00:15:53.608 Unrecoverable Media Errors: 0 00:15:53.608 Lifetime Error Log Entries: 0 00:15:53.608 Warning Temperature Time: 0 minutes 00:15:53.608 Critical Temperature Time: 0 minutes 00:15:53.608 00:15:53.608 Number of Queues 00:15:53.608 ================ 00:15:53.608 Number of I/O Submission Queues: 127 00:15:53.608 Number of I/O Completion Queues: 127 00:15:53.608 00:15:53.608 Active Namespaces 00:15:53.608 ================= 00:15:53.608 Namespace ID:1 00:15:53.608 Error Recovery Timeout: Unlimited 00:15:53.608 Command Set Identifier: NVM (00h) 00:15:53.608 Deallocate: Supported 00:15:53.608 Deallocated/Unwritten Error: Not Supported 00:15:53.608 Deallocated Read Value: Unknown 00:15:53.608 Deallocate in Write Zeroes: Not Supported 00:15:53.608 Deallocated Guard Field: 0xFFFF 00:15:53.608 Flush: Supported 00:15:53.608 Reservation: Supported 00:15:53.608 Namespace Sharing Capabilities: Multiple Controllers 00:15:53.608 Size (in LBAs): 131072 (0GiB) 00:15:53.608 Capacity (in LBAs): 131072 (0GiB) 00:15:53.608 Utilization (in LBAs): 131072 (0GiB) 00:15:53.608 NGUID: 98C2BEF69DEB4EAEBE1A4C5E0C38ED79 00:15:53.608 UUID: 98c2bef6-9deb-4eae-be1a-4c5e0c38ed79 00:15:53.608 Thin Provisioning: Not Supported 00:15:53.608 Per-NS Atomic Units: Yes 00:15:53.608 Atomic Boundary Size (Normal): 0 00:15:53.608 Atomic Boundary Size (PFail): 0 00:15:53.608 Atomic Boundary Offset: 0 00:15:53.608 Maximum Single Source Range Length: 65535 00:15:53.608 Maximum Copy Length: 65535 00:15:53.608 Maximum Source Range Count: 1 00:15:53.608 NGUID/EUI64 Never Reused: No 00:15:53.608 Namespace Write Protected: No 00:15:53.608 Number of LBA Formats: 1 00:15:53.608 Current LBA Format: LBA Format #00 00:15:53.608 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:53.608 00:15:53.608 13:20:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:53.609 [2024-12-05 13:20:16.072246] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:58.890 Initializing NVMe Controllers 00:15:58.890
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:58.890 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:58.890 Initialization complete. Launching workers. 00:15:58.890 ======================================================== 00:15:58.890 Latency(us) 00:15:58.890 Device Information : IOPS MiB/s Average min max 00:15:58.890 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39960.48 156.10 3203.03 862.29 6820.97 00:15:58.890 ======================================================== 00:15:58.890 Total : 39960.48 156.10 3203.03 862.29 6820.97 00:15:58.890 00:15:58.890 [2024-12-05 13:20:21.180074] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:58.890 13:20:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:58.890 [2024-12-05 13:20:21.368640] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:04.176 Initializing NVMe Controllers 00:16:04.176 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:04.176 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:04.176 Initialization complete. Launching workers. 00:16:04.176 ======================================================== 00:16:04.176 Latency(us) 00:16:04.176 Device Information : IOPS MiB/s Average min max 00:16:04.176 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34193.40 133.57 3744.91 1127.64 8716.81 00:16:04.176 ======================================================== 00:16:04.176 Total : 34193.40 133.57 3744.91 1127.64 8716.81 00:16:04.176 00:16:04.176 [2024-12-05 13:20:26.390314] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:04.176 13:20:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:04.176 [2024-12-05 13:20:26.603503] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:09.465 [2024-12-05 13:20:31.745953] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:09.465 Initializing NVMe Controllers 00:16:09.465 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:09.465 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:09.465 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:09.465 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:09.465 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:09.465 Initialization complete. Launching workers. 
00:16:09.465 Starting thread on core 2 00:16:09.465 Starting thread on core 3 00:16:09.465 Starting thread on core 1 00:16:09.465 13:20:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:09.727 [2024-12-05 13:20:32.039333] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:13.034 [2024-12-05 13:20:35.131649] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:13.034 Initializing NVMe Controllers 00:16:13.034 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:13.034 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:13.034 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:13.034 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:13.034 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:13.034 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:13.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:13.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:13.034 Initialization complete. Launching workers. 00:16:13.034 Starting thread on core 1 with urgent priority queue 00:16:13.034 Starting thread on core 2 with urgent priority queue 00:16:13.034 Starting thread on core 3 with urgent priority queue 00:16:13.034 Starting thread on core 0 with urgent priority queue 00:16:13.034 SPDK bdev Controller (SPDK2 ) core 0: 12125.33 IO/s 8.25 secs/100000 ios 00:16:13.034 SPDK bdev Controller (SPDK2 ) core 1: 10561.33 IO/s 9.47 secs/100000 ios 00:16:13.034 SPDK bdev Controller (SPDK2 ) core 2: 14055.00 IO/s 7.11 secs/100000 ios 00:16:13.034 SPDK bdev Controller (SPDK2 ) core 3: 11019.67 IO/s 9.07 secs/100000 ios 00:16:13.034 ======================================================== 00:16:13.034 00:16:13.034 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:13.034 [2024-12-05 13:20:35.430307] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:13.034 Initializing NVMe Controllers 00:16:13.034 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:13.034 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:13.034 Namespace ID: 1 size: 0GB 00:16:13.034 Initialization complete. 00:16:13.034 INFO: using host memory buffer for IO 00:16:13.034 Hello world! 
00:16:13.034 [2024-12-05 13:20:35.442371] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:13.034 13:20:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:13.294 [2024-12-05 13:20:35.736928] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:14.679 Initializing NVMe Controllers 00:16:14.679 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:14.679 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:14.679 Initialization complete. Launching workers. 00:16:14.679 submit (in ns) avg, min, max = 7868.9, 3890.0, 4005028.3 00:16:14.679 complete (in ns) avg, min, max = 19241.3, 2408.3, 4006233.3 00:16:14.679 00:16:14.679 Submit histogram 00:16:14.679 ================ 00:16:14.679 Range in us Cumulative Count 00:16:14.679 3.867 - 3.893: 0.0318% ( 6) 00:16:14.679 3.893 - 3.920: 1.9828% ( 368) 00:16:14.679 3.920 - 3.947: 6.8285% ( 914) 00:16:14.679 3.947 - 3.973: 14.2615% ( 1402) 00:16:14.679 3.973 - 4.000: 25.1140% ( 2047) 00:16:14.679 4.000 - 4.027: 37.2283% ( 2285) 00:16:14.679 4.027 - 4.053: 50.6733% ( 2536) 00:16:14.679 4.053 - 4.080: 67.4637% ( 3167) 00:16:14.679 4.080 - 4.107: 82.9445% ( 2920) 00:16:14.679 4.107 - 4.133: 92.2278% ( 1751) 00:16:14.679 4.133 - 4.160: 96.7872% ( 860) 00:16:14.679 4.160 - 4.187: 98.6216% ( 346) 00:16:14.679 4.187 - 4.213: 99.1676% ( 103) 00:16:14.679 4.213 - 4.240: 99.3479% ( 34) 00:16:14.679 4.240 - 4.267: 99.3956% ( 9) 00:16:14.679 4.267 - 4.293: 99.4433% ( 9) 00:16:14.679 4.293 - 4.320: 99.4698% ( 5) 00:16:14.679 4.320 - 4.347: 99.4804% ( 2) 00:16:14.679 4.347 - 4.373: 99.4857% ( 1) 00:16:14.679 4.373 - 4.400: 99.4910% ( 1) 00:16:14.679 4.453 - 4.480: 99.5122% ( 4) 00:16:14.679 4.560 - 4.587: 99.5175% ( 1) 00:16:14.679 4.613 - 4.640: 99.5229% ( 1) 00:16:14.679 4.693 - 4.720: 99.5282% ( 1) 00:16:14.679 4.853 - 4.880: 99.5335% ( 1) 00:16:14.679 4.933 - 4.960: 99.5494% ( 3) 00:16:14.679 5.067 - 5.093: 99.5547% ( 1) 00:16:14.679 5.120 - 5.147: 99.5600% ( 1) 00:16:14.679 5.253 - 5.280: 99.5653% ( 1) 00:16:14.679 5.387 - 5.413: 99.5706% ( 1) 00:16:14.679 5.467 - 5.493: 99.5759% ( 1) 00:16:14.679 5.760 - 5.787: 99.5812% ( 1) 00:16:14.679 5.867 - 5.893: 99.5865% ( 1) 00:16:14.679 5.920 - 5.947: 99.5918% ( 1) 00:16:14.679 5.973 - 6.000: 99.5971% ( 1) 00:16:14.679 6.000 - 6.027: 99.6024% ( 1) 00:16:14.680 6.053 - 6.080: 99.6077% ( 1) 00:16:14.680 6.107 - 6.133: 99.6183% ( 2) 00:16:14.680 6.133 - 6.160: 99.6289% ( 2) 00:16:14.680 6.160 - 6.187: 99.6342% ( 1) 00:16:14.680 6.187 - 6.213: 99.6448% ( 2) 00:16:14.680 6.267 - 6.293: 99.6501% ( 1) 00:16:14.680 6.347 - 6.373: 99.6554% ( 1) 00:16:14.680 6.400 - 6.427: 99.6713% ( 3) 00:16:14.680 6.453 - 6.480: 99.6766% ( 1) 00:16:14.680 6.480 - 6.507: 99.6819% ( 1) 00:16:14.680 6.507 - 6.533: 99.6872% ( 1) 00:16:14.680 6.613 - 6.640: 99.7031% ( 3) 00:16:14.680 6.640 - 6.667: 99.7084% ( 1) 00:16:14.680 6.693 - 6.720: 99.7137% ( 1) 00:16:14.680 6.720 - 6.747: 99.7190% ( 1) 00:16:14.680 6.747 - 6.773: 99.7243% ( 1) 00:16:14.680 6.800 - 6.827: 99.7296% ( 1) 00:16:14.680 6.827 - 6.880: 99.7455% ( 3) 00:16:14.680 6.880 - 6.933: 99.7614% ( 3) 00:16:14.680 6.933 - 6.987: 99.7720% ( 2) 00:16:14.680 6.987 - 7.040: 99.7773% ( 1) 00:16:14.680 7.040 - 7.093: 99.7932% ( 3) 
00:16:14.680 7.093 - 7.147: 99.8038% ( 2) 00:16:14.680 7.200 - 7.253: 99.8144% ( 2) 00:16:14.680 7.360 - 7.413: 99.8250% ( 2) 00:16:14.680 7.413 - 7.467: 99.8356% ( 2) 00:16:14.680 7.467 - 7.520: 99.8463% ( 2) 00:16:14.680 7.520 - 7.573: 99.8569% ( 2) 00:16:14.680 7.573 - 7.627: 99.8675% ( 2) 00:16:14.680 7.627 - 7.680: 99.8728% ( 1) 00:16:14.680 7.787 - 7.840: 99.8834% ( 2) 00:16:14.680 7.947 - 8.000: 99.8887% ( 1) 00:16:14.680 8.160 - 8.213: 99.8940% ( 1) 00:16:14.680 8.480 - 8.533: 99.8993% ( 1) 00:16:14.680 9.173 - 9.227: 99.9046% ( 1) 00:16:14.680 3986.773 - 4014.080: 100.0000% ( 18) 00:16:14.680 00:16:14.680 Complete histogram 00:16:14.680 ================== 00:16:14.680 Range in us Cumulative Count 00:16:14.680 2.400 - 2.413: 0.0053% ( 1) 00:16:14.680 2.413 - 2.427: 0.8960% ( 168) 00:16:14.680 2.427 - 2.440: 1.2141% ( 60) 00:16:14.680 2.440 - 2.453: 1.2883% ( 14) 00:16:14.680 2.453 - 2.467: 1.4474% ( 30) 00:16:14.680 [2024-12-05 13:20:36.831496] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:14.680 2.467 - 2.480: 46.9515% ( 8583) 00:16:14.680 2.480 - 2.493: 64.8977% ( 3385) 00:16:14.680 2.493 - 2.507: 74.4089% ( 1794) 00:16:14.680 2.507 - 2.520: 79.1326% ( 891) 00:16:14.680 2.520 - 2.533: 82.0168% ( 544) 00:16:14.680 2.533 - 2.547: 87.6895% ( 1070) 00:16:14.680 2.547 - 2.560: 93.6433% ( 1123) 00:16:14.680 2.560 - 2.573: 96.8402% ( 603) 00:16:14.680 2.573 - 2.587: 98.2982% ( 275) 00:16:14.680 2.587 - 2.600: 98.9927% ( 131) 00:16:14.680 2.600 - 2.613: 99.2684% ( 52) 00:16:14.680 2.613 - 2.627: 99.3214% ( 10) 00:16:14.680 2.627 - 2.640: 99.3320% ( 2) 00:16:14.680 2.640 - 2.653: 99.3373% ( 1) 00:16:14.680 2.653 - 2.667: 99.3426% ( 1) 00:16:14.680 2.667 - 2.680: 99.3479% ( 1) 00:16:14.680 2.733 - 2.747: 99.3532% ( 1) 00:16:14.680 4.693 - 4.720: 99.3585% ( 1) 00:16:14.680 4.747 - 4.773: 99.3638% ( 1) 00:16:14.680 4.773 - 4.800: 99.3691% ( 1) 00:16:14.680 4.800 - 4.827: 99.3744% ( 1) 00:16:14.680 4.907 - 4.933: 99.3850% ( 2) 00:16:14.680 4.933 - 4.960: 99.4009% ( 3) 00:16:14.680 5.040 - 5.067: 99.4115% ( 2) 00:16:14.680 5.067 - 5.093: 99.4221% ( 2) 00:16:14.680 5.120 - 5.147: 99.4274% ( 1) 00:16:14.680 5.173 - 5.200: 99.4327% ( 1) 00:16:14.680 5.280 - 5.307: 99.4380% ( 1) 00:16:14.680 5.307 - 5.333: 99.4592% ( 4) 00:16:14.680 5.387 - 5.413: 99.4645% ( 1) 00:16:14.680 5.440 - 5.467: 99.4698% ( 1) 00:16:14.680 5.467 - 5.493: 99.4804% ( 2) 00:16:14.680 5.493 - 5.520: 99.4857% ( 1) 00:16:14.680 5.600 - 5.627: 99.4910% ( 1) 00:16:14.680 5.627 - 5.653: 99.4963% ( 1) 00:16:14.680 5.653 - 5.680: 99.5016% ( 1) 00:16:14.680 5.680 - 5.707: 99.5069% ( 1) 00:16:14.680 5.760 - 5.787: 99.5122% ( 1) 00:16:14.680 5.813 - 5.840: 99.5229% ( 2) 00:16:14.680 5.867 - 5.893: 99.5335% ( 2) 00:16:14.680 5.893 - 5.920: 99.5388% ( 1) 00:16:14.680 6.080 - 6.107: 99.5441% ( 1) 00:16:14.680 6.107 - 6.133: 99.5494% ( 1) 00:16:14.680 6.240 - 6.267: 99.5547% ( 1) 00:16:14.680 6.427 - 6.453: 99.5600% ( 1) 00:16:14.680 7.307 - 7.360: 99.5653% ( 1) 00:16:14.680 13.653 - 13.760: 99.5706% ( 1) 00:16:14.680 36.053 - 36.267: 99.5759% ( 1) 00:16:14.680 56.747 - 57.173: 99.5812% ( 1) 00:16:14.680 3986.773 - 4014.080: 100.0000% ( 79) 00:16:14.680 00:16:14.680 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:14.680 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local
traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:14.680 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:14.680 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:14.680 13:20:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:14.680 [ 00:16:14.680 { 00:16:14.680 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:14.680 "subtype": "Discovery", 00:16:14.680 "listen_addresses": [], 00:16:14.680 "allow_any_host": true, 00:16:14.680 "hosts": [] 00:16:14.680 }, 00:16:14.680 { 00:16:14.680 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:14.680 "subtype": "NVMe", 00:16:14.680 "listen_addresses": [ 00:16:14.680 { 00:16:14.680 "trtype": "VFIOUSER", 00:16:14.680 "adrfam": "IPv4", 00:16:14.680 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:14.680 "trsvcid": "0" 00:16:14.680 } 00:16:14.680 ], 00:16:14.680 "allow_any_host": true, 00:16:14.680 "hosts": [], 00:16:14.680 "serial_number": "SPDK1", 00:16:14.680 "model_number": "SPDK bdev Controller", 00:16:14.680 "max_namespaces": 32, 00:16:14.680 "min_cntlid": 1, 00:16:14.680 "max_cntlid": 65519, 00:16:14.680 "namespaces": [ 00:16:14.680 { 00:16:14.680 "nsid": 1, 00:16:14.680 "bdev_name": "Malloc1", 00:16:14.680 "name": "Malloc1", 00:16:14.680 "nguid": "857FE08C5095462EBBDAA04EED535CE9", 00:16:14.680 "uuid": "857fe08c-5095-462e-bbda-a04eed535ce9" 00:16:14.680 }, 00:16:14.680 { 00:16:14.680 "nsid": 2, 00:16:14.680 "bdev_name": "Malloc3", 00:16:14.680 "name": "Malloc3", 00:16:14.680 "nguid": "420D4242FE584D46BDFA0E46A5B89BB4", 00:16:14.680 "uuid": "420d4242-fe58-4d46-bdfa-0e46a5b89bb4" 00:16:14.680 } 00:16:14.680 ] 00:16:14.680 }, 00:16:14.680 { 00:16:14.680 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:14.680 "subtype": "NVMe", 00:16:14.680 "listen_addresses": [ 00:16:14.680 { 00:16:14.680 "trtype": "VFIOUSER", 00:16:14.680 "adrfam": "IPv4", 00:16:14.680 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:14.680 "trsvcid": "0" 00:16:14.680 } 00:16:14.680 ], 00:16:14.680 "allow_any_host": true, 00:16:14.680 "hosts": [], 00:16:14.680 "serial_number": "SPDK2", 00:16:14.680 "model_number": "SPDK bdev Controller", 00:16:14.680 "max_namespaces": 32, 00:16:14.680 "min_cntlid": 1, 00:16:14.680 "max_cntlid": 65519, 00:16:14.680 "namespaces": [ 00:16:14.680 { 00:16:14.680 "nsid": 1, 00:16:14.680 "bdev_name": "Malloc2", 00:16:14.680 "name": "Malloc2", 00:16:14.680 "nguid": "98C2BEF69DEB4EAEBE1A4C5E0C38ED79", 00:16:14.680 "uuid": "98c2bef6-9deb-4eae-be1a-4c5e0c38ed79" 00:16:14.680 } 00:16:14.680 ] 00:16:14.680 } 00:16:14.680 ] 00:16:14.680 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:14.680 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=888218 00:16:14.680 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:14.680 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:14.680 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@1269 -- # local i=0 00:16:14.680 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:14.680 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:14.680 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:14.680 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:14.680 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:14.942 Malloc4 00:16:14.942 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:14.942 [2024-12-05 13:20:37.269254] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:14.942 [2024-12-05 13:20:37.417241] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:14.942 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:14.942 Asynchronous Event Request test 00:16:14.942 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:14.942 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:14.942 Registering asynchronous event callbacks... 00:16:14.942 Starting namespace attribute notice tests for all controllers... 00:16:14.942 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:14.942 aer_cb - Changed Namespace 00:16:14.942 Cleaning up... 
00:16:15.203 [ 00:16:15.203 { 00:16:15.203 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:15.203 "subtype": "Discovery", 00:16:15.203 "listen_addresses": [], 00:16:15.203 "allow_any_host": true, 00:16:15.203 "hosts": [] 00:16:15.203 }, 00:16:15.203 { 00:16:15.203 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:15.203 "subtype": "NVMe", 00:16:15.203 "listen_addresses": [ 00:16:15.203 { 00:16:15.203 "trtype": "VFIOUSER", 00:16:15.203 "adrfam": "IPv4", 00:16:15.203 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:15.203 "trsvcid": "0" 00:16:15.203 } 00:16:15.203 ], 00:16:15.203 "allow_any_host": true, 00:16:15.203 "hosts": [], 00:16:15.203 "serial_number": "SPDK1", 00:16:15.203 "model_number": "SPDK bdev Controller", 00:16:15.203 "max_namespaces": 32, 00:16:15.203 "min_cntlid": 1, 00:16:15.203 "max_cntlid": 65519, 00:16:15.203 "namespaces": [ 00:16:15.203 { 00:16:15.203 "nsid": 1, 00:16:15.203 "bdev_name": "Malloc1", 00:16:15.203 "name": "Malloc1", 00:16:15.203 "nguid": "857FE08C5095462EBBDAA04EED535CE9", 00:16:15.203 "uuid": "857fe08c-5095-462e-bbda-a04eed535ce9" 00:16:15.203 }, 00:16:15.203 { 00:16:15.203 "nsid": 2, 00:16:15.203 "bdev_name": "Malloc3", 00:16:15.203 "name": "Malloc3", 00:16:15.203 "nguid": "420D4242FE584D46BDFA0E46A5B89BB4", 00:16:15.203 "uuid": "420d4242-fe58-4d46-bdfa-0e46a5b89bb4" 00:16:15.203 } 00:16:15.203 ] 00:16:15.203 }, 00:16:15.203 { 00:16:15.203 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:15.203 "subtype": "NVMe", 00:16:15.203 "listen_addresses": [ 00:16:15.203 { 00:16:15.203 "trtype": "VFIOUSER", 00:16:15.203 "adrfam": "IPv4", 00:16:15.203 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:15.203 "trsvcid": "0" 00:16:15.203 } 00:16:15.203 ], 00:16:15.203 "allow_any_host": true, 00:16:15.203 "hosts": [], 00:16:15.203 "serial_number": "SPDK2", 00:16:15.203 "model_number": "SPDK bdev Controller", 00:16:15.203 "max_namespaces": 32, 00:16:15.203 "min_cntlid": 1, 00:16:15.203 "max_cntlid": 65519, 00:16:15.203 "namespaces": [ 00:16:15.203 { 00:16:15.203 "nsid": 1, 00:16:15.203 "bdev_name": "Malloc2", 00:16:15.203 "name": "Malloc2", 00:16:15.203 "nguid": "98C2BEF69DEB4EAEBE1A4C5E0C38ED79", 00:16:15.203 "uuid": "98c2bef6-9deb-4eae-be1a-4c5e0c38ed79" 00:16:15.203 }, 00:16:15.203 { 00:16:15.203 "nsid": 2, 00:16:15.203 "bdev_name": "Malloc4", 00:16:15.203 "name": "Malloc4", 00:16:15.203 "nguid": "BE6F771DC3DE4A55B8A5A4FB75F113F4", 00:16:15.203 "uuid": "be6f771d-c3de-4a55-b8a5-a4fb75f113f4" 00:16:15.203 } 00:16:15.203 ] 00:16:15.203 } 00:16:15.203 ] 00:16:15.203 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 888218 00:16:15.203 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:15.203 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 879122 00:16:15.203 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 879122 ']' 00:16:15.203 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 879122 00:16:15.203 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:15.203 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:15.203 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 879122 00:16:15.203 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:15.203 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:15.203 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 879122' 00:16:15.203 killing process with pid 879122 00:16:15.203 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 879122 00:16:15.203 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 879122 00:16:15.465 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:15.465 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:15.465 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:15.465 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:15.465 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:15.465 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=888237 00:16:15.465 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 888237' 00:16:15.465 Process pid: 888237 00:16:15.465 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:15.465 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:15.465 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 888237 00:16:15.465 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 888237 ']' 00:16:15.465 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.465 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.465 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.465 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.465 13:20:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:15.465 [2024-12-05 13:20:37.911174] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:15.465 [2024-12-05 13:20:37.912129] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:16:15.465 [2024-12-05 13:20:37.912174] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.465 [2024-12-05 13:20:37.995352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:15.465 [2024-12-05 13:20:38.030805] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.465 [2024-12-05 13:20:38.030839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.465 [2024-12-05 13:20:38.030847] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.465 [2024-12-05 13:20:38.030854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.465 [2024-12-05 13:20:38.030868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.726 [2024-12-05 13:20:38.033898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.726 [2024-12-05 13:20:38.034158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.726 [2024-12-05 13:20:38.034314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:15.726 [2024-12-05 13:20:38.034315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.726 [2024-12-05 13:20:38.090431] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:15.726 [2024-12-05 13:20:38.090510] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:15.726 [2024-12-05 13:20:38.090910] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:15.726 [2024-12-05 13:20:38.091488] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:15.726 [2024-12-05 13:20:38.091497] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:16:15.726 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:15.726 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:15.726 13:20:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:16.669 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:16.930 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:16.930 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:16.930 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:16.930 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:16.930 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:16.930 Malloc1 00:16:17.192 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:17.192 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:17.453 13:20:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:17.714 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:17.714 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:17.714 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:17.975 Malloc2 00:16:17.975 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:17.975 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:18.236 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:18.497 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:18.497 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 888237 00:16:18.497 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # '[' -z 888237 ']' 00:16:18.497 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 888237 00:16:18.497 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:18.497 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:18.497 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 888237 00:16:18.497 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:18.497 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:18.497 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 888237' 00:16:18.497 killing process with pid 888237 00:16:18.497 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 888237 00:16:18.497 13:20:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 888237 00:16:18.497 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:18.757 00:16:18.757 real 0m50.894s 00:16:18.757 user 3m17.114s 00:16:18.757 sys 0m2.781s 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:18.757 ************************************ 00:16:18.757 END TEST nvmf_vfio_user 00:16:18.757 ************************************ 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:18.757 ************************************ 00:16:18.757 START TEST nvmf_vfio_user_nvme_compliance 00:16:18.757 ************************************ 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:18.757 * Looking for test storage... 
00:16:18.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:18.757 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:19.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.018 --rc genhtml_branch_coverage=1 00:16:19.018 --rc genhtml_function_coverage=1 00:16:19.018 --rc genhtml_legend=1 00:16:19.018 --rc geninfo_all_blocks=1 00:16:19.018 --rc geninfo_unexecuted_blocks=1 00:16:19.018 00:16:19.018 ' 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:19.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.018 --rc genhtml_branch_coverage=1 00:16:19.018 --rc genhtml_function_coverage=1 00:16:19.018 --rc genhtml_legend=1 00:16:19.018 --rc geninfo_all_blocks=1 00:16:19.018 --rc geninfo_unexecuted_blocks=1 00:16:19.018 00:16:19.018 ' 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:19.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.018 --rc genhtml_branch_coverage=1 00:16:19.018 --rc genhtml_function_coverage=1 00:16:19.018 --rc genhtml_legend=1 00:16:19.018 --rc geninfo_all_blocks=1 00:16:19.018 --rc geninfo_unexecuted_blocks=1 00:16:19.018 00:16:19.018 ' 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:19.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:19.018 --rc genhtml_branch_coverage=1 00:16:19.018 --rc genhtml_function_coverage=1 00:16:19.018 --rc genhtml_legend=1 00:16:19.018 --rc geninfo_all_blocks=1 00:16:19.018 --rc 
geninfo_unexecuted_blocks=1 00:16:19.018 00:16:19.018 ' 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.018 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:19.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=889001 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 889001' 00:16:19.019 Process pid: 889001 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 889001 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 889001 ']' 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:19.019 13:20:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:19.019 [2024-12-05 13:20:41.424333] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:16:19.019 [2024-12-05 13:20:41.424411] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.019 [2024-12-05 13:20:41.508070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:19.019 [2024-12-05 13:20:41.549405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.019 [2024-12-05 13:20:41.549444] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:19.019 [2024-12-05 13:20:41.549452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.019 [2024-12-05 13:20:41.549459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.019 [2024-12-05 13:20:41.549465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.019 [2024-12-05 13:20:41.551083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.019 [2024-12-05 13:20:41.551257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.019 [2024-12-05 13:20:41.551261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.957 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:19.957 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:19.957 13:20:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:20.897 malloc0 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:20.897 13:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.897 13:20:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:21.198 00:16:21.198 00:16:21.198 CUnit - A unit testing framework for C - Version 2.1-3 00:16:21.198 http://cunit.sourceforge.net/ 00:16:21.198 00:16:21.198 00:16:21.198 Suite: nvme_compliance 00:16:21.198 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-05 13:20:43.529327] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.198 [2024-12-05 13:20:43.530689] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:21.198 [2024-12-05 13:20:43.530700] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:21.198 [2024-12-05 13:20:43.530705] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:21.198 [2024-12-05 13:20:43.532343] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:21.198 passed 00:16:21.198 Test: admin_identify_ctrlr_verify_fused ...[2024-12-05 13:20:43.627922] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.198 [2024-12-05 13:20:43.630935] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:21.198 passed 00:16:21.198 Test: admin_identify_ns ...[2024-12-05 13:20:43.726110] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.458 [2024-12-05 13:20:43.789875] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:21.458 [2024-12-05 13:20:43.797890] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:21.458 [2024-12-05 13:20:43.818999] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:16:21.458 passed 00:16:21.458 Test: admin_get_features_mandatory_features ...[2024-12-05 13:20:43.910621] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.458 [2024-12-05 13:20:43.913645] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:21.458 passed 00:16:21.458 Test: admin_get_features_optional_features ...[2024-12-05 13:20:44.007197] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.458 [2024-12-05 13:20:44.010215] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:21.718 passed 00:16:21.718 Test: admin_set_features_number_of_queues ...[2024-12-05 13:20:44.104344] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.718 [2024-12-05 13:20:44.208965] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:21.718 passed 00:16:21.979 Test: admin_get_log_page_mandatory_logs ...[2024-12-05 13:20:44.300586] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.979 [2024-12-05 13:20:44.303605] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:21.979 passed 00:16:21.979 Test: admin_get_log_page_with_lpo ...[2024-12-05 13:20:44.397683] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.979 [2024-12-05 13:20:44.464871] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:21.979 [2024-12-05 13:20:44.477928] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:21.979 passed 00:16:22.239 Test: fabric_property_get ...[2024-12-05 13:20:44.569544] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:22.239 [2024-12-05 13:20:44.570804] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:22.239 [2024-12-05 13:20:44.572563] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:22.239 passed 00:16:22.239 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-05 13:20:44.665102] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:22.239 [2024-12-05 13:20:44.666361] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:22.239 [2024-12-05 13:20:44.668119] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:22.239 passed 00:16:22.239 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-05 13:20:44.762110] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:22.500 [2024-12-05 13:20:44.845870] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:22.500 [2024-12-05 13:20:44.861868] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:22.500 [2024-12-05 13:20:44.866959] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:22.500 passed 00:16:22.500 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-05 13:20:44.960998] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:22.500 [2024-12-05 13:20:44.962238] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:22.500 [2024-12-05 13:20:44.964017] vfio_user.c:2802:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:16:22.500 passed 00:16:22.500 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-05 13:20:45.057111] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:22.761 [2024-12-05 13:20:45.127874] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:22.761 [2024-12-05 13:20:45.151875] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:22.761 [2024-12-05 13:20:45.156964] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:22.761 passed 00:16:22.761 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-05 13:20:45.246574] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:22.761 [2024-12-05 13:20:45.247828] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:22.761 [2024-12-05 13:20:45.247851] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:22.761 [2024-12-05 13:20:45.249592] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:22.761 passed 00:16:23.027 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-05 13:20:45.342698] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:23.027 [2024-12-05 13:20:45.433878] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:23.027 [2024-12-05 13:20:45.441869] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:23.027 [2024-12-05 13:20:45.449870] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:23.027 [2024-12-05 13:20:45.457870] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:23.027 [2024-12-05 13:20:45.486946] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:23.027 passed 00:16:23.027 Test: admin_create_io_sq_verify_pc ...[2024-12-05 13:20:45.580543] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:23.287 [2024-12-05 13:20:45.594876] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:23.287 [2024-12-05 13:20:45.612699] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:23.287 passed 00:16:23.287 Test: admin_create_io_qp_max_qps ...[2024-12-05 13:20:45.707252] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.669 [2024-12-05 13:20:46.795875] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:24.669 [2024-12-05 13:20:47.187689] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.669 passed 00:16:24.928 Test: admin_create_io_sq_shared_cq ...[2024-12-05 13:20:47.279811] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:24.929 [2024-12-05 13:20:47.412872] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:24.929 [2024-12-05 13:20:47.449924] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:24.929 passed 00:16:24.929 00:16:24.929 Run Summary: Type Total Ran Passed Failed Inactive 00:16:24.929 suites 1 1 n/a 0 0 00:16:24.929 tests 18 18 18 0 0 00:16:24.929 asserts 
360 360 360 0 n/a 00:16:24.929 00:16:24.929 Elapsed time = 1.642 seconds 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 889001 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 889001 ']' 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 889001 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 889001 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 889001' 00:16:25.189 killing process with pid 889001 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 889001 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 889001 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:25.189 00:16:25.189 real 0m6.566s 00:16:25.189 user 0m18.626s 00:16:25.189 sys 0m0.550s 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:25.189 ************************************ 00:16:25.189 END TEST nvmf_vfio_user_nvme_compliance 00:16:25.189 ************************************ 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.189 13:20:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:25.450 ************************************ 00:16:25.450 START TEST nvmf_vfio_user_fuzz 00:16:25.450 ************************************ 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:25.450 * Looking for test storage... 
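The teardown traced above (kill -0, ps --no-headers -o comm=, then kill and a final wait) is the usual probe-then-terminate idiom: confirm the pid is still alive, recover its command name so a sudo wrapper is never signalled by mistake, then SIGTERM the target and reap its exit status. A minimal re-implementation of that idiom, written here for illustration rather than copied from autotest_common.sh:

killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0      # nothing to do: already gone
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = "sudo" ] && return 1            # refuse to signal the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap; exit status after SIGTERM is nonzero
}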
00:16:25.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.450 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:25.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.451 --rc genhtml_branch_coverage=1 00:16:25.451 --rc genhtml_function_coverage=1 00:16:25.451 --rc genhtml_legend=1 00:16:25.451 --rc geninfo_all_blocks=1 00:16:25.451 --rc geninfo_unexecuted_blocks=1 00:16:25.451 00:16:25.451 ' 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:25.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.451 --rc genhtml_branch_coverage=1 00:16:25.451 --rc genhtml_function_coverage=1 00:16:25.451 --rc genhtml_legend=1 00:16:25.451 --rc geninfo_all_blocks=1 00:16:25.451 --rc geninfo_unexecuted_blocks=1 00:16:25.451 00:16:25.451 ' 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:25.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.451 --rc genhtml_branch_coverage=1 00:16:25.451 --rc genhtml_function_coverage=1 00:16:25.451 --rc genhtml_legend=1 00:16:25.451 --rc geninfo_all_blocks=1 00:16:25.451 --rc geninfo_unexecuted_blocks=1 00:16:25.451 00:16:25.451 ' 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:25.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.451 --rc genhtml_branch_coverage=1 00:16:25.451 --rc genhtml_function_coverage=1 00:16:25.451 --rc genhtml_legend=1 00:16:25.451 --rc geninfo_all_blocks=1 00:16:25.451 --rc geninfo_unexecuted_blocks=1 00:16:25.451 00:16:25.451 ' 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:25.451 13:20:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:25.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=890385 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 890385' 00:16:25.451 Process pid: 890385 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 890385 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 890385 ']' 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
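waitforlisten above is what keeps the harness from racing the target: nvmf_tgt is started in the background and the script polls until the RPC endpoint at /var/tmp/spdk.sock answers. A sketch of that polling loop, assuming SPDK's rpc.py is on PATH (the real helper also honors the max_retries=100 budget seen in the trace and bails out early if the pid dies during startup):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1                    # target died while starting
        rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.1
    done
    return 1                                                      # never came up
}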
00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.451 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:26.394 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.394 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:26.394 13:20:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:27.778 malloc0 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.778 13:20:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:27.778 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.778 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
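Strung together, the rpc_cmd calls traced above provision the entire vfio-user target that the fuzzer then attacks: a VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, its namespace, and a listener rooted at /var/run/vfio-user. rpc_cmd is a thin wrapper around rpc.py, so the same sequence expressed directly looks roughly like this:

rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
rpc.py bdev_malloc_create 64 512 -b malloc0                 # 64 MiB bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0

With the target in place, the nvme_fuzz app below connects through the vfio-user socket (trtype:VFIOUSER) and hammers the admin and I/O queues for the 30-second budget given by -t 30.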
00:16:27.778 13:20:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:00.012 Fuzzing completed. Shutting down the fuzz application 00:17:00.012 00:17:00.012 Dumping successful admin opcodes: 00:17:00.012 9, 10, 00:17:00.012 Dumping successful io opcodes: 00:17:00.012 0, 00:17:00.012 NS: 0x20000081ef00 I/O qp, Total commands completed: 1129974, total successful commands: 4447, random_seed: 400712640 00:17:00.012 NS: 0x20000081ef00 admin qp, Total commands completed: 143216, total successful commands: 32, random_seed: 3369622784 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 890385 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 890385 ']' 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 890385 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 890385 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 890385' 00:17:00.012 killing process with pid 890385 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 890385 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 890385 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:00.012 00:17:00.012 real 0m33.875s 00:17:00.012 user 0m37.980s 00:17:00.012 sys 0m26.692s 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:00.012 ************************************ 
00:17:00.012 END TEST nvmf_vfio_user_fuzz 00:17:00.012 ************************************ 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:00.012 13:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:00.013 ************************************ 00:17:00.013 START TEST nvmf_auth_target 00:17:00.013 ************************************ 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:00.013 * Looking for test storage... 00:17:00.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:00.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.013 --rc genhtml_branch_coverage=1 00:17:00.013 --rc genhtml_function_coverage=1 00:17:00.013 --rc genhtml_legend=1 00:17:00.013 --rc geninfo_all_blocks=1 00:17:00.013 --rc geninfo_unexecuted_blocks=1 00:17:00.013 00:17:00.013 ' 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:00.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.013 --rc genhtml_branch_coverage=1 00:17:00.013 --rc genhtml_function_coverage=1 00:17:00.013 --rc genhtml_legend=1 00:17:00.013 --rc geninfo_all_blocks=1 00:17:00.013 --rc geninfo_unexecuted_blocks=1 00:17:00.013 00:17:00.013 ' 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:00.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.013 --rc genhtml_branch_coverage=1 00:17:00.013 --rc genhtml_function_coverage=1 00:17:00.013 --rc genhtml_legend=1 00:17:00.013 --rc geninfo_all_blocks=1 00:17:00.013 --rc geninfo_unexecuted_blocks=1 00:17:00.013 00:17:00.013 ' 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:00.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.013 --rc genhtml_branch_coverage=1 00:17:00.013 --rc genhtml_function_coverage=1 00:17:00.013 --rc genhtml_legend=1 00:17:00.013 --rc geninfo_all_blocks=1 00:17:00.013 --rc geninfo_unexecuted_blocks=1 00:17:00.013 00:17:00.013 ' 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.013 13:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.013 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:00.014 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:00.014 13:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:08.199 
13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:08.199 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:08.199 13:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:08.199 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:08.199 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:08.200 Found net devices under 0000:31:00.0: cvl_0_0 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:08.200 Found net devices under 0000:31:00.1: cvl_0_1 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:08.200 13:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:08.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:08.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.674 ms 00:17:08.200 00:17:08.200 --- 10.0.0.2 ping statistics --- 00:17:08.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.200 rtt min/avg/max/mdev = 0.674/0.674/0.674/0.000 ms 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:08.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:08.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:17:08.200 00:17:08.200 --- 10.0.0.1 ping statistics --- 00:17:08.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.200 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=901773 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 901773 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 901773 ']' 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
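
The trace above shows how the harness turns one physical host into both ends of an NVMe/TCP connection: the target-facing port is moved into a private network namespace while the initiator port stays in the root namespace. Condensed into plain commands (the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses are the ones discovered on this particular machine):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns

This is why nvmf_tgt (nvmfpid=901773) is launched above under "ip netns exec cvl_0_0_ns_spdk", while the host-side spdk_tgt listens on /var/tmp/host.sock in the root namespace.
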
00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.200 13:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.769 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.769 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:08.769 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:08.769 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.769 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.769 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.769 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=901974 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=880550d881257dabcfb20af20e78fac466d01bf001537cab 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yHN 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 880550d881257dabcfb20af20e78fac466d01bf001537cab 0 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 880550d881257dabcfb20af20e78fac466d01bf001537cab 0 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=880550d881257dabcfb20af20e78fac466d01bf001537cab 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:08.770 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
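
The xtrace above shows gen_dhchap_key drawing 24 random bytes (xxd -p -c0 -l 24 /dev/urandom, giving the 48-hex-character key) and then piping it through an inline "python -" whose body xtrace does not echo. Judging from the DHHC-1:00:ODgwNTUw...Z+ZIxw==: secret this same key produces at nvme connect time further down, the formatting step is most plausibly base64 over the key bytes plus a little-endian CRC32. A minimal stand-alone sketch, assuming exactly that layout (the function name is illustrative, not copied from common.sh):

format_dhchap_key_sketch() {
  local key=$1 digest=$2   # digest id: 0=null 1=sha256 2=sha384 3=sha512, as in the digests map above
  python3 - "$key" "$digest" << 'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte checksum appended before encoding
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY
}
# format_dhchap_key_sketch 880550d881257dabcfb20af20e78fac466d01bf001537cab 0
# -> DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==:

The formatted secret is then chmod 0600'd and stashed under /tmp/spdk.key-*.* for the keyring RPCs later in the run.
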
00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yHN 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yHN 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.yHN 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c0e0980ed651919727fc6c994fe50f598c8b55273eec38d64a946cdec7c25e46 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.uIr 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c0e0980ed651919727fc6c994fe50f598c8b55273eec38d64a946cdec7c25e46 3 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c0e0980ed651919727fc6c994fe50f598c8b55273eec38d64a946cdec7c25e46 3 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c0e0980ed651919727fc6c994fe50f598c8b55273eec38d64a946cdec7c25e46 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.uIr 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.uIr 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.uIr 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
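
Worth noting the arithmetic tying each pair of xtrace lines together: gen_dhchap_key is asked for a key of len hex characters and reads exactly len/2 bytes from /dev/urandom, since xxd -p prints two hex digits per byte (the 48- and 64-character keys above use -l 24 and -l 32; the 32-character keys below use -l 16). In isolation:

len=32
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len/2 random bytes -> len hex characters
echo "${#key}"                                   # 32
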
00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a6c9038aee031a1919490ade4a8cafa2 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.QUf 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a6c9038aee031a1919490ade4a8cafa2 1 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a6c9038aee031a1919490ade4a8cafa2 1 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a6c9038aee031a1919490ade4a8cafa2 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.QUf 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.QUf 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.QUf 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a97af482adb94468456ee6336c46a8eac72854ce7c03e48a 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.k67 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a97af482adb94468456ee6336c46a8eac72854ce7c03e48a 2 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a97af482adb94468456ee6336c46a8eac72854ce7c03e48a 2 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:09.031 13:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a97af482adb94468456ee6336c46a8eac72854ce7c03e48a 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.k67 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.k67 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.k67 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=42524c2a17bf8855ff6ec82354428534fe90c0402ccbf492 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.rlW 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 42524c2a17bf8855ff6ec82354428534fe90c0402ccbf492 2 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 42524c2a17bf8855ff6ec82354428534fe90c0402ccbf492 2 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=42524c2a17bf8855ff6ec82354428534fe90c0402ccbf492 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:09.031 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.rlW 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.rlW 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.rlW 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
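
Two parallel arrays are filled by this block: keys[i] becomes the host's DH-HMAC-CHAP secret and ckeys[i] the optional controller secret used for bidirectional authentication. ckeys[3] is deliberately left empty further down, and the harness relies on bash's ${var:+...} expansion to drop the controller-key flags entirely in that case (see the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line inside connect_authenticate below). A minimal demonstration of that expansion:

unset c;  echo ${c:+--dhchap-ctrlr-key "ckey3"}   # expands to nothing at all
c=/tmp/k; echo ${c:+--dhchap-ctrlr-key "ckey3"}   # --dhchap-ctrlr-key ckey3
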
00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5b0030b4d58dbde6b9d940bf190ad9ca 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.nW7 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5b0030b4d58dbde6b9d940bf190ad9ca 1 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5b0030b4d58dbde6b9d940bf190ad9ca 1 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5b0030b4d58dbde6b9d940bf190ad9ca 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.nW7 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.nW7 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.nW7 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6ee8e841679ebffb9758c70ddd8569999d74ccdb76d3c47bca435e55c7089627 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.EM6 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 6ee8e841679ebffb9758c70ddd8569999d74ccdb76d3c47bca435e55c7089627 3 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6ee8e841679ebffb9758c70ddd8569999d74ccdb76d3c47bca435e55c7089627 3 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6ee8e841679ebffb9758c70ddd8569999d74ccdb76d3c47bca435e55c7089627 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.EM6 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.EM6 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.EM6 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 901773 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 901773 ']' 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.292 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.554 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.554 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:09.554 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 901974 /var/tmp/host.sock 00:17:09.554 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 901974 ']' 00:17:09.554 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:09.554 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.554 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:09.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
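
With all four key files in place and both daemons listening, the records that follow register every key on both sides: rpc_cmd is the harness wrapper driving the nvmf_tgt's default socket (/var/tmp/spdk.sock), while hostrpc targets the host-side spdk_tgt at /var/tmp/host.sock. Approximately, assuming rpc_cmd resolves to plain rpc.py against the default socket:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in "${!keys[@]}"; do
  "$RPC" keyring_file_add_key "key$i" "${keys[$i]}"                        # target side
  "$RPC" -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"  # host side
  if [[ -n ${ckeys[$i]} ]]; then                                           # controller keys, when present
    "$RPC" keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    "$RPC" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
  fi
done
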
00:17:09.554 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.554 13:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.554 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.554 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:09.554 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:09.554 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.554 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.554 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.554 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:09.554 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yHN 00:17:09.554 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.554 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.554 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.554 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.yHN 00:17:09.554 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.yHN 00:17:09.815 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.uIr ]] 00:17:09.815 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uIr 00:17:09.815 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.815 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.815 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.815 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uIr 00:17:09.815 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uIr 00:17:10.075 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:10.075 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.QUf 00:17:10.075 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.075 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.075 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.075 13:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.QUf 00:17:10.075 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.QUf 00:17:10.075 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.k67 ]] 00:17:10.075 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k67 00:17:10.075 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.075 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.075 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.075 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k67 00:17:10.075 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k67 00:17:10.336 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:10.336 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.rlW 00:17:10.336 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.336 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.336 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.336 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.rlW 00:17:10.336 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.rlW 00:17:10.597 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.nW7 ]] 00:17:10.597 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nW7 00:17:10.597 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.597 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.597 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.597 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nW7 00:17:10.597 13:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nW7 00:17:10.597 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:10.597 13:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.EM6 00:17:10.597 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.597 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.597 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.597 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.EM6 00:17:10.597 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.EM6 00:17:10.857 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.858 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.858 
13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.119 00:17:11.119 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.119 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.119 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.380 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.380 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.380 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.380 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.380 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.380 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.380 { 00:17:11.380 "cntlid": 1, 00:17:11.380 "qid": 0, 00:17:11.380 "state": "enabled", 00:17:11.380 "thread": "nvmf_tgt_poll_group_000", 00:17:11.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:11.380 "listen_address": { 00:17:11.380 "trtype": "TCP", 00:17:11.380 "adrfam": "IPv4", 00:17:11.380 "traddr": "10.0.0.2", 00:17:11.380 "trsvcid": "4420" 00:17:11.380 }, 00:17:11.381 "peer_address": { 00:17:11.381 "trtype": "TCP", 00:17:11.381 "adrfam": "IPv4", 00:17:11.381 "traddr": "10.0.0.1", 00:17:11.381 "trsvcid": "52368" 00:17:11.381 }, 00:17:11.381 "auth": { 00:17:11.381 "state": "completed", 00:17:11.381 "digest": "sha256", 00:17:11.381 "dhgroup": "null" 00:17:11.381 } 00:17:11.381 } 00:17:11.381 ]' 00:17:11.381 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.381 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.381 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.381 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:11.381 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.642 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.642 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.642 13:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.642 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:17:11.642 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:17:12.585 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.585 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:12.585 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.585 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.585 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.585 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.585 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:12.585 13:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:12.585 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:12.585 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.585 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:12.585 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:12.585 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:12.585 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.585 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.585 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.585 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.585 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.585 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.585 13:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.585 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.845 00:17:12.845 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.845 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.845 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.105 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.105 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.105 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.105 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.105 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.105 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.105 { 00:17:13.105 "cntlid": 3, 00:17:13.105 "qid": 0, 00:17:13.105 "state": "enabled", 00:17:13.105 "thread": "nvmf_tgt_poll_group_000", 00:17:13.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:13.105 "listen_address": { 00:17:13.105 "trtype": "TCP", 00:17:13.105 "adrfam": "IPv4", 00:17:13.105 "traddr": "10.0.0.2", 00:17:13.105 "trsvcid": "4420" 00:17:13.105 }, 00:17:13.106 "peer_address": { 00:17:13.106 "trtype": "TCP", 00:17:13.106 "adrfam": "IPv4", 00:17:13.106 "traddr": "10.0.0.1", 00:17:13.106 "trsvcid": "52394" 00:17:13.106 }, 00:17:13.106 "auth": { 00:17:13.106 "state": "completed", 00:17:13.106 "digest": "sha256", 00:17:13.106 "dhgroup": "null" 00:17:13.106 } 00:17:13.106 } 00:17:13.106 ]' 00:17:13.106 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.106 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.106 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.106 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:13.106 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.366 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.366 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.366 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.366 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:17:13.366 13:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.308 13:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.308 13:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.568 00:17:14.568 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.568 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.568 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.829 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.829 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.829 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.829 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.829 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.829 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.829 { 00:17:14.829 "cntlid": 5, 00:17:14.829 "qid": 0, 00:17:14.829 "state": "enabled", 00:17:14.829 "thread": "nvmf_tgt_poll_group_000", 00:17:14.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:14.829 "listen_address": { 00:17:14.829 "trtype": "TCP", 00:17:14.829 "adrfam": "IPv4", 00:17:14.829 "traddr": "10.0.0.2", 00:17:14.829 "trsvcid": "4420" 00:17:14.829 }, 00:17:14.829 "peer_address": { 00:17:14.829 "trtype": "TCP", 00:17:14.829 "adrfam": "IPv4", 00:17:14.829 "traddr": "10.0.0.1", 00:17:14.829 "trsvcid": "52414" 00:17:14.829 }, 00:17:14.829 "auth": { 00:17:14.829 "state": "completed", 00:17:14.829 "digest": "sha256", 00:17:14.829 "dhgroup": "null" 00:17:14.829 } 00:17:14.829 } 00:17:14.829 ]' 00:17:14.829 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.829 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.829 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.829 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:14.829 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.091 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.091 13:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.091 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.091 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:17:15.091 13:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.033 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.311 00:17:16.311 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.311 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.311 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.571 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.571 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.571 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.571 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.571 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.571 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.571 { 00:17:16.571 "cntlid": 7, 00:17:16.571 "qid": 0, 00:17:16.571 "state": "enabled", 00:17:16.571 "thread": "nvmf_tgt_poll_group_000", 00:17:16.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:16.571 "listen_address": { 00:17:16.571 "trtype": "TCP", 00:17:16.571 "adrfam": "IPv4", 00:17:16.571 "traddr": "10.0.0.2", 00:17:16.571 "trsvcid": "4420" 00:17:16.571 }, 00:17:16.571 "peer_address": { 00:17:16.571 "trtype": "TCP", 00:17:16.571 "adrfam": "IPv4", 00:17:16.571 "traddr": "10.0.0.1", 00:17:16.571 "trsvcid": "52428" 00:17:16.571 }, 00:17:16.571 "auth": { 00:17:16.571 "state": "completed", 00:17:16.571 "digest": "sha256", 00:17:16.571 "dhgroup": "null" 00:17:16.571 } 00:17:16.571 } 00:17:16.571 ]' 00:17:16.571 13:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.571 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:16.571 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.571 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:16.571 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.571 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.571 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.571 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.831 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:17:16.831 13:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:17:17.771 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.771 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:17.771 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.771 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.771 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.771 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.771 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.772 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:17.772 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:17.772 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:17.772 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.772 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:17.772 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:17.772 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:17.772 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.772 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.772 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.772 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.772 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.772 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.772 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:17.772 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.032 00:17:18.032 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.032 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.032 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.292 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.292 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.292 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.292 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.292 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.292 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.292 { 00:17:18.292 "cntlid": 9, 00:17:18.292 "qid": 0, 00:17:18.292 "state": "enabled", 00:17:18.292 "thread": "nvmf_tgt_poll_group_000", 00:17:18.292 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:18.292 "listen_address": { 00:17:18.292 "trtype": "TCP", 00:17:18.292 "adrfam": "IPv4", 00:17:18.292 "traddr": "10.0.0.2", 00:17:18.292 "trsvcid": "4420" 00:17:18.292 }, 00:17:18.292 "peer_address": { 00:17:18.292 "trtype": "TCP", 00:17:18.292 "adrfam": "IPv4", 00:17:18.292 "traddr": "10.0.0.1", 00:17:18.292 "trsvcid": "52454" 00:17:18.292 }, 00:17:18.292 "auth": { 00:17:18.292 "state": "completed", 00:17:18.292 "digest": "sha256", 00:17:18.292 "dhgroup": "ffdhe2048" 00:17:18.292 } 00:17:18.292 } 00:17:18.292 ]' 00:17:18.292 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.292 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:18.292 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.292 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:17:18.292 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.292 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.292 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.292 13:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.552 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:17:18.552 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.491 13:21:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.491 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.492 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.492 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.492 13:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.752 00:17:19.752 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.752 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.752 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.011 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.011 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.011 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.012 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.012 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.012 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.012 { 00:17:20.012 "cntlid": 11, 00:17:20.012 "qid": 0, 00:17:20.012 "state": "enabled", 00:17:20.012 "thread": "nvmf_tgt_poll_group_000", 00:17:20.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:20.012 "listen_address": { 00:17:20.012 "trtype": "TCP", 00:17:20.012 "adrfam": "IPv4", 00:17:20.012 "traddr": "10.0.0.2", 00:17:20.012 "trsvcid": "4420" 00:17:20.012 }, 00:17:20.012 "peer_address": { 00:17:20.012 "trtype": "TCP", 00:17:20.012 "adrfam": "IPv4", 00:17:20.012 "traddr": "10.0.0.1", 00:17:20.012 "trsvcid": "41996" 00:17:20.012 }, 00:17:20.012 "auth": { 00:17:20.012 "state": "completed", 00:17:20.012 "digest": "sha256", 00:17:20.012 "dhgroup": "ffdhe2048" 00:17:20.012 } 00:17:20.012 } 00:17:20.012 ]' 00:17:20.012 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.012 13:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.012 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.012 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:20.012 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.012 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.012 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.012 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.273 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:17:20.273 13:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:17:21.214 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:21.215 13:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.215 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.476 00:17:21.476 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.476 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.476 13:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.737 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.738 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.738 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.738 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.738 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.738 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.738 { 00:17:21.738 "cntlid": 13, 00:17:21.738 "qid": 0, 00:17:21.738 "state": "enabled", 00:17:21.738 "thread": "nvmf_tgt_poll_group_000", 00:17:21.738 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:21.738 "listen_address": { 00:17:21.738 "trtype": "TCP", 00:17:21.738 "adrfam": "IPv4", 00:17:21.738 "traddr": "10.0.0.2", 00:17:21.738 "trsvcid": "4420" 00:17:21.738 }, 00:17:21.738 "peer_address": { 00:17:21.738 "trtype": "TCP", 00:17:21.738 "adrfam": "IPv4", 00:17:21.738 "traddr": "10.0.0.1", 00:17:21.738 "trsvcid": "42022" 00:17:21.738 }, 00:17:21.738 "auth": { 00:17:21.738 "state": "completed", 00:17:21.738 "digest": 
"sha256", 00:17:21.738 "dhgroup": "ffdhe2048" 00:17:21.738 } 00:17:21.738 } 00:17:21.738 ]' 00:17:21.738 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.738 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.738 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.738 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.738 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.738 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.738 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.738 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.998 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:17:21.998 13:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.937 13:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:22.937 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.197 00:17:23.197 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.197 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.197 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.458 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.458 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.458 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.458 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.458 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.459 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.459 { 00:17:23.459 "cntlid": 15, 00:17:23.459 "qid": 0, 00:17:23.459 "state": "enabled", 00:17:23.459 "thread": "nvmf_tgt_poll_group_000", 00:17:23.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:23.459 "listen_address": { 00:17:23.459 "trtype": "TCP", 00:17:23.459 "adrfam": "IPv4", 00:17:23.459 "traddr": "10.0.0.2", 00:17:23.459 "trsvcid": "4420" 00:17:23.459 }, 00:17:23.459 "peer_address": { 00:17:23.459 "trtype": "TCP", 00:17:23.459 "adrfam": "IPv4", 00:17:23.459 "traddr": "10.0.0.1", 00:17:23.459 
"trsvcid": "42042" 00:17:23.459 }, 00:17:23.459 "auth": { 00:17:23.459 "state": "completed", 00:17:23.459 "digest": "sha256", 00:17:23.459 "dhgroup": "ffdhe2048" 00:17:23.459 } 00:17:23.459 } 00:17:23.459 ]' 00:17:23.459 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.459 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.459 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.459 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:23.459 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.459 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.459 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.459 13:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.719 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:17:23.719 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:17:24.660 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.660 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:24.660 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.660 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.660 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.660 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.660 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.660 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:24.660 13:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:24.660 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:24.660 13:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.660 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:24.660 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:24.660 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:24.660 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.660 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.660 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.660 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.660 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.660 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.660 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.660 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.920 00:17:24.920 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.920 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.920 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.181 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.181 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.181 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.181 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.181 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.181 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.181 { 00:17:25.181 "cntlid": 17, 00:17:25.181 "qid": 0, 00:17:25.181 "state": "enabled", 00:17:25.181 "thread": "nvmf_tgt_poll_group_000", 00:17:25.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:25.181 "listen_address": { 00:17:25.181 "trtype": "TCP", 00:17:25.181 "adrfam": "IPv4", 
00:17:25.181 "traddr": "10.0.0.2", 00:17:25.181 "trsvcid": "4420" 00:17:25.181 }, 00:17:25.181 "peer_address": { 00:17:25.181 "trtype": "TCP", 00:17:25.181 "adrfam": "IPv4", 00:17:25.181 "traddr": "10.0.0.1", 00:17:25.181 "trsvcid": "42058" 00:17:25.181 }, 00:17:25.181 "auth": { 00:17:25.181 "state": "completed", 00:17:25.181 "digest": "sha256", 00:17:25.181 "dhgroup": "ffdhe3072" 00:17:25.181 } 00:17:25.181 } 00:17:25.181 ]' 00:17:25.181 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.181 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.181 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.181 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:25.181 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.181 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.181 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.181 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.442 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:17:25.443 13:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:17:26.384 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.384 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:26.384 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.384 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.384 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.384 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.384 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:26.384 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:26.385 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:26.385 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.385 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:26.385 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:26.385 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:26.385 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.385 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.385 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.385 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.385 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.385 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.385 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.385 13:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.646 00:17:26.646 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.646 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.646 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.907 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.907 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.907 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.907 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.907 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.907 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.907 { 
00:17:26.907 "cntlid": 19, 00:17:26.907 "qid": 0, 00:17:26.907 "state": "enabled", 00:17:26.907 "thread": "nvmf_tgt_poll_group_000", 00:17:26.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:26.907 "listen_address": { 00:17:26.907 "trtype": "TCP", 00:17:26.907 "adrfam": "IPv4", 00:17:26.907 "traddr": "10.0.0.2", 00:17:26.907 "trsvcid": "4420" 00:17:26.907 }, 00:17:26.907 "peer_address": { 00:17:26.907 "trtype": "TCP", 00:17:26.907 "adrfam": "IPv4", 00:17:26.907 "traddr": "10.0.0.1", 00:17:26.907 "trsvcid": "42086" 00:17:26.907 }, 00:17:26.907 "auth": { 00:17:26.907 "state": "completed", 00:17:26.907 "digest": "sha256", 00:17:26.907 "dhgroup": "ffdhe3072" 00:17:26.907 } 00:17:26.907 } 00:17:26.907 ]' 00:17:26.907 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.907 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.907 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.907 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:26.907 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.168 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.168 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.168 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.168 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:17:27.168 13:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:17:28.111 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.111 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:28.111 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.111 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.111 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.111 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.111 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.112 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:28.112 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:28.112 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.112 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:28.112 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:28.112 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.112 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.112 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.112 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.112 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.112 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.112 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.112 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.112 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.373 00:17:28.373 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.373 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.373 13:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.635 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.635 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.635 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.635 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.635 13:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.635 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.635 { 00:17:28.635 "cntlid": 21, 00:17:28.635 "qid": 0, 00:17:28.635 "state": "enabled", 00:17:28.635 "thread": "nvmf_tgt_poll_group_000", 00:17:28.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:28.635 "listen_address": { 00:17:28.635 "trtype": "TCP", 00:17:28.635 "adrfam": "IPv4", 00:17:28.635 "traddr": "10.0.0.2", 00:17:28.635 "trsvcid": "4420" 00:17:28.635 }, 00:17:28.635 "peer_address": { 00:17:28.635 "trtype": "TCP", 00:17:28.635 "adrfam": "IPv4", 00:17:28.635 "traddr": "10.0.0.1", 00:17:28.635 "trsvcid": "42110" 00:17:28.635 }, 00:17:28.635 "auth": { 00:17:28.635 "state": "completed", 00:17:28.635 "digest": "sha256", 00:17:28.635 "dhgroup": "ffdhe3072" 00:17:28.635 } 00:17:28.635 } 00:17:28.635 ]' 00:17:28.635 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.635 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.635 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.635 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.635 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.896 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.896 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.896 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.896 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:17:28.896 13:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:29.838 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:29.839 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.100 00:17:30.100 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.100 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.100 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.362 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.362 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.362 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.362 13:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.362 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.362 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.362 { 00:17:30.362 "cntlid": 23, 00:17:30.362 "qid": 0, 00:17:30.362 "state": "enabled", 00:17:30.362 "thread": "nvmf_tgt_poll_group_000", 00:17:30.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:30.362 "listen_address": { 00:17:30.362 "trtype": "TCP", 00:17:30.362 "adrfam": "IPv4", 00:17:30.362 "traddr": "10.0.0.2", 00:17:30.362 "trsvcid": "4420" 00:17:30.362 }, 00:17:30.362 "peer_address": { 00:17:30.362 "trtype": "TCP", 00:17:30.362 "adrfam": "IPv4", 00:17:30.362 "traddr": "10.0.0.1", 00:17:30.362 "trsvcid": "37558" 00:17:30.362 }, 00:17:30.362 "auth": { 00:17:30.362 "state": "completed", 00:17:30.362 "digest": "sha256", 00:17:30.362 "dhgroup": "ffdhe3072" 00:17:30.362 } 00:17:30.362 } 00:17:30.362 ]' 00:17:30.362 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.362 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.362 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.362 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.362 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.624 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.624 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.624 13:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.624 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:17:30.624 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:17:31.568 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.568 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:31.568 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.568 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.568 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:17:31.568 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.568 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.568 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:31.568 13:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:31.568 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:31.568 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.568 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:31.568 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:31.568 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.568 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.568 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.568 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.568 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.568 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.568 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.568 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.568 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.828 00:17:31.828 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.828 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.828 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.089 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.089 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.089 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.089 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.089 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.089 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.089 { 00:17:32.089 "cntlid": 25, 00:17:32.089 "qid": 0, 00:17:32.089 "state": "enabled", 00:17:32.089 "thread": "nvmf_tgt_poll_group_000", 00:17:32.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:32.089 "listen_address": { 00:17:32.089 "trtype": "TCP", 00:17:32.089 "adrfam": "IPv4", 00:17:32.089 "traddr": "10.0.0.2", 00:17:32.089 "trsvcid": "4420" 00:17:32.089 }, 00:17:32.089 "peer_address": { 00:17:32.089 "trtype": "TCP", 00:17:32.089 "adrfam": "IPv4", 00:17:32.089 "traddr": "10.0.0.1", 00:17:32.089 "trsvcid": "37582" 00:17:32.089 }, 00:17:32.089 "auth": { 00:17:32.089 "state": "completed", 00:17:32.089 "digest": "sha256", 00:17:32.089 "dhgroup": "ffdhe4096" 00:17:32.089 } 00:17:32.089 } 00:17:32.089 ]' 00:17:32.089 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.089 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.089 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.350 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:32.350 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.350 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.350 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.350 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.350 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:17:32.350 13:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.315 13:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.575 00:17:33.836 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.836 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.836 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.836 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.836 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.836 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.836 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.836 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.836 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.836 { 00:17:33.836 "cntlid": 27, 00:17:33.836 "qid": 0, 00:17:33.836 "state": "enabled", 00:17:33.836 "thread": "nvmf_tgt_poll_group_000", 00:17:33.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:33.836 "listen_address": { 00:17:33.836 "trtype": "TCP", 00:17:33.836 "adrfam": "IPv4", 00:17:33.836 "traddr": "10.0.0.2", 00:17:33.836 "trsvcid": "4420" 00:17:33.836 }, 00:17:33.836 "peer_address": { 00:17:33.836 "trtype": "TCP", 00:17:33.836 "adrfam": "IPv4", 00:17:33.836 "traddr": "10.0.0.1", 00:17:33.836 "trsvcid": "37626" 00:17:33.836 }, 00:17:33.836 "auth": { 00:17:33.836 "state": "completed", 00:17:33.836 "digest": "sha256", 00:17:33.836 "dhgroup": "ffdhe4096" 00:17:33.836 } 00:17:33.836 } 00:17:33.836 ]' 00:17:33.836 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.836 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.836 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.098 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:34.098 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.098 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.098 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.098 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.098 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:17:34.098 13:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:35.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.048 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.308 00:17:35.568 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
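On the host side, each pass exercises the kernel initiator through nvme-cli: nvme connect carries the host's DH-HMAC-CHAP key in --dhchap-secret and, on bidirectional passes, the controller key in --dhchap-ctrl-secret; the key3 passes omit the controller secret and so authenticate the host only. The secrets are in the DHHC-1 representation from the NVMe base specification, where the second field names the transformation applied to the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), which is why the key3 values in this log begin DHHC-1:03. A sketch of one unidirectional cycle, assuming an nvme-cli build with DH-HMAC-CHAP support; the generated key is a placeholder, only the flags mirror this run:

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  # emit a SHA-256-transformed key in DHHC-1 form (value differs per invocation)
  hostkey=$(nvme gen-dhchap-key --hmac=1 --nqn "$hostnqn")
  # the same DHHC-1 string must be registered for this host on the target
  # (in this run: nvmf_subsystem_add_host ... --dhchap-key keyN), otherwise
  # the connect below fails authentication
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2024-03.io.spdk:cnode0 \
      -q "$hostnqn" --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
      -l 0 --dhchap-secret "$hostkey"     # -l 0: ctrl-loss-tmo, as in this run
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0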
00:17:35.568 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.568 13:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.568 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.568 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.568 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.568 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.568 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.568 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.568 { 00:17:35.568 "cntlid": 29, 00:17:35.568 "qid": 0, 00:17:35.568 "state": "enabled", 00:17:35.568 "thread": "nvmf_tgt_poll_group_000", 00:17:35.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:35.568 "listen_address": { 00:17:35.568 "trtype": "TCP", 00:17:35.568 "adrfam": "IPv4", 00:17:35.568 "traddr": "10.0.0.2", 00:17:35.568 "trsvcid": "4420" 00:17:35.568 }, 00:17:35.568 "peer_address": { 00:17:35.568 "trtype": "TCP", 00:17:35.568 "adrfam": "IPv4", 00:17:35.568 "traddr": "10.0.0.1", 00:17:35.568 "trsvcid": "37652" 00:17:35.568 }, 00:17:35.568 "auth": { 00:17:35.568 "state": "completed", 00:17:35.568 "digest": "sha256", 00:17:35.568 "dhgroup": "ffdhe4096" 00:17:35.568 } 00:17:35.568 } 00:17:35.568 ]' 00:17:35.568 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.568 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.568 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.828 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:35.828 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.828 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.828 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.828 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.828 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:17:35.828 13:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: 
--dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:17:36.767 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.767 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:36.767 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.767 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.767 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.767 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.767 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:36.767 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:37.027 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:37.027 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.027 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:37.027 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:37.027 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:37.027 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.027 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:37.027 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.027 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.027 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.027 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:37.027 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.027 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.288 00:17:37.288 13:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.288 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.288 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.288 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.288 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.288 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.288 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.288 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.288 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.288 { 00:17:37.288 "cntlid": 31, 00:17:37.288 "qid": 0, 00:17:37.288 "state": "enabled", 00:17:37.288 "thread": "nvmf_tgt_poll_group_000", 00:17:37.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:37.288 "listen_address": { 00:17:37.288 "trtype": "TCP", 00:17:37.288 "adrfam": "IPv4", 00:17:37.288 "traddr": "10.0.0.2", 00:17:37.288 "trsvcid": "4420" 00:17:37.288 }, 00:17:37.288 "peer_address": { 00:17:37.288 "trtype": "TCP", 00:17:37.288 "adrfam": "IPv4", 00:17:37.288 "traddr": "10.0.0.1", 00:17:37.288 "trsvcid": "37678" 00:17:37.288 }, 00:17:37.288 "auth": { 00:17:37.288 "state": "completed", 00:17:37.288 "digest": "sha256", 00:17:37.288 "dhgroup": "ffdhe4096" 00:17:37.288 } 00:17:37.288 } 00:17:37.288 ]' 00:17:37.288 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.549 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.549 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.549 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:37.549 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.549 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.549 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.549 13:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.811 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:17:37.811 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret 
DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:17:38.383 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.383 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.383 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:38.383 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.383 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.383 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.383 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.383 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.383 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:38.383 13:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:38.643 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:38.643 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.643 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:38.643 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:38.643 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:38.643 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.643 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.643 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.643 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.643 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.643 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.643 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.643 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.904 00:17:38.904 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:38.904 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:38.904 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.164 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.164 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.164 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.164 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.164 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.164 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.164 { 00:17:39.164 "cntlid": 33, 00:17:39.164 "qid": 0, 00:17:39.164 "state": "enabled", 00:17:39.164 "thread": "nvmf_tgt_poll_group_000", 00:17:39.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:39.164 "listen_address": { 00:17:39.164 "trtype": "TCP", 00:17:39.164 "adrfam": "IPv4", 00:17:39.164 "traddr": "10.0.0.2", 00:17:39.164 "trsvcid": "4420" 00:17:39.164 }, 00:17:39.164 "peer_address": { 00:17:39.164 "trtype": "TCP", 00:17:39.164 "adrfam": "IPv4", 00:17:39.164 "traddr": "10.0.0.1", 00:17:39.164 "trsvcid": "37712" 00:17:39.164 }, 00:17:39.164 "auth": { 00:17:39.164 "state": "completed", 00:17:39.164 "digest": "sha256", 00:17:39.164 "dhgroup": "ffdhe6144" 00:17:39.164 } 00:17:39.164 } 00:17:39.164 ]' 00:17:39.164 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.164 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.164 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.164 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:39.164 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.424 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.424 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.424 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.424 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret 
DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:17:39.424 13:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.364 13:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.938 00:17:40.938 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.938 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.938 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.938 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.938 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.938 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.938 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.938 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.938 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.938 { 00:17:40.938 "cntlid": 35, 00:17:40.938 "qid": 0, 00:17:40.938 "state": "enabled", 00:17:40.938 "thread": "nvmf_tgt_poll_group_000", 00:17:40.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:40.938 "listen_address": { 00:17:40.938 "trtype": "TCP", 00:17:40.938 "adrfam": "IPv4", 00:17:40.938 "traddr": "10.0.0.2", 00:17:40.938 "trsvcid": "4420" 00:17:40.938 }, 00:17:40.938 "peer_address": { 00:17:40.938 "trtype": "TCP", 00:17:40.938 "adrfam": "IPv4", 00:17:40.938 "traddr": "10.0.0.1", 00:17:40.938 "trsvcid": "46344" 00:17:40.938 }, 00:17:40.938 "auth": { 00:17:40.938 "state": "completed", 00:17:40.938 "digest": "sha256", 00:17:40.938 "dhgroup": "ffdhe6144" 00:17:40.938 } 00:17:40.938 } 00:17:40.938 ]' 00:17:40.938 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.938 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:40.938 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.199 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:41.199 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.199 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.199 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.199 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.199 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:17:41.199 13:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.141 13:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.714 00:17:42.714 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.714 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.714 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.714 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.714 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.714 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.714 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.714 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.714 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.714 { 00:17:42.714 "cntlid": 37, 00:17:42.714 "qid": 0, 00:17:42.714 "state": "enabled", 00:17:42.714 "thread": "nvmf_tgt_poll_group_000", 00:17:42.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:42.714 "listen_address": { 00:17:42.714 "trtype": "TCP", 00:17:42.714 "adrfam": "IPv4", 00:17:42.714 "traddr": "10.0.0.2", 00:17:42.714 "trsvcid": "4420" 00:17:42.714 }, 00:17:42.714 "peer_address": { 00:17:42.714 "trtype": "TCP", 00:17:42.714 "adrfam": "IPv4", 00:17:42.714 "traddr": "10.0.0.1", 00:17:42.714 "trsvcid": "46376" 00:17:42.714 }, 00:17:42.714 "auth": { 00:17:42.714 "state": "completed", 00:17:42.714 "digest": "sha256", 00:17:42.714 "dhgroup": "ffdhe6144" 00:17:42.714 } 00:17:42.714 } 00:17:42.714 ]' 00:17:42.714 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.974 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.974 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.974 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:42.974 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.974 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.974 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:42.974 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.234 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:17:43.234 13:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:17:43.804 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.804 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:43.804 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.804 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.804 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.804 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.804 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:43.804 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:44.064 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:44.065 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.065 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:44.065 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:44.065 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:44.065 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.065 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:44.065 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.065 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.065 13:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.065 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:44.065 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.065 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.332 00:17:44.628 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.628 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.628 13:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.628 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.628 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.628 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.628 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.628 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.628 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.628 { 00:17:44.628 "cntlid": 39, 00:17:44.628 "qid": 0, 00:17:44.628 "state": "enabled", 00:17:44.628 "thread": "nvmf_tgt_poll_group_000", 00:17:44.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:44.628 "listen_address": { 00:17:44.628 "trtype": "TCP", 00:17:44.628 "adrfam": "IPv4", 00:17:44.628 "traddr": "10.0.0.2", 00:17:44.628 "trsvcid": "4420" 00:17:44.628 }, 00:17:44.628 "peer_address": { 00:17:44.628 "trtype": "TCP", 00:17:44.628 "adrfam": "IPv4", 00:17:44.628 "traddr": "10.0.0.1", 00:17:44.628 "trsvcid": "46406" 00:17:44.628 }, 00:17:44.628 "auth": { 00:17:44.628 "state": "completed", 00:17:44.628 "digest": "sha256", 00:17:44.628 "dhgroup": "ffdhe6144" 00:17:44.628 } 00:17:44.628 } 00:17:44.628 ]' 00:17:44.628 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.628 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.628 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.903 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:44.903 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.903 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:44.903 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.903 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.903 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:17:44.903 13:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
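The trace repeats one fixed pattern per (digest, dhgroup, key) combination: the host-side bdev application on /var/tmp/host.sock is pinned to a single DH-HMAC-CHAP digest and DH group, the target registers the host NQN with the matching key pair, and bdev_nvme_attach_controller then performs the authenticated fabrics connect. A minimal sketch of one iteration follows, with rpc.py standing for the full scripts/rpc.py path used above, $hostnqn standing for the uuid host NQN, and the key names (key2/ckey2) assumed to have been loaded into SPDK's keyring earlier in the test:

    # host side: restrict what the initiator may negotiate
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
    # target side: allow this host NQN; the ctrlr key enables bidirectional
    # (controller) authentication
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # host side: authenticated connect; fails if digest, dhgroup, or keys mismatch
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2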
00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.847 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.417 00:17:46.417 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.417 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.417 13:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.676 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.676 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.676 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.676 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.676 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.676 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.676 { 00:17:46.676 "cntlid": 41, 00:17:46.676 "qid": 0, 00:17:46.676 "state": "enabled", 00:17:46.676 "thread": "nvmf_tgt_poll_group_000", 00:17:46.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:46.676 "listen_address": { 00:17:46.676 "trtype": "TCP", 00:17:46.676 "adrfam": "IPv4", 00:17:46.676 "traddr": "10.0.0.2", 00:17:46.676 "trsvcid": "4420" 00:17:46.676 }, 00:17:46.676 "peer_address": { 00:17:46.676 "trtype": "TCP", 00:17:46.676 "adrfam": "IPv4", 00:17:46.676 "traddr": "10.0.0.1", 00:17:46.676 "trsvcid": "46430" 00:17:46.676 }, 00:17:46.676 "auth": { 00:17:46.676 "state": "completed", 00:17:46.676 "digest": "sha256", 00:17:46.676 "dhgroup": "ffdhe8192" 00:17:46.676 } 00:17:46.676 } 00:17:46.676 ]' 00:17:46.676 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.676 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.676 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.676 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.676 13:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.676 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.676 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.676 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.936 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:17:46.936 13:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.876 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.446 00:17:48.446 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.446 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.446 13:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.705 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.705 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.705 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.705 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.705 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.705 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.705 { 00:17:48.706 "cntlid": 43, 00:17:48.706 "qid": 0, 00:17:48.706 "state": "enabled", 00:17:48.706 "thread": "nvmf_tgt_poll_group_000", 00:17:48.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:48.706 "listen_address": { 00:17:48.706 "trtype": "TCP", 00:17:48.706 "adrfam": "IPv4", 00:17:48.706 "traddr": "10.0.0.2", 00:17:48.706 "trsvcid": "4420" 00:17:48.706 }, 00:17:48.706 "peer_address": { 00:17:48.706 "trtype": "TCP", 00:17:48.706 "adrfam": "IPv4", 00:17:48.706 "traddr": "10.0.0.1", 00:17:48.706 "trsvcid": "46472" 00:17:48.706 }, 00:17:48.706 "auth": { 00:17:48.706 "state": "completed", 00:17:48.706 "digest": "sha256", 00:17:48.706 "dhgroup": "ffdhe8192" 00:17:48.706 } 00:17:48.706 } 00:17:48.706 ]' 00:17:48.706 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.706 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:48.706 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.706 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:48.706 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.706 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.706 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.706 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.966 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:17:48.967 13:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:49.911 13:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.911 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.912 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.482 00:17:50.482 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.482 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.482 13:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.743 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.743 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.743 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.743 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.743 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.743 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.743 { 00:17:50.743 "cntlid": 45, 00:17:50.743 "qid": 0, 00:17:50.743 "state": "enabled", 00:17:50.743 "thread": "nvmf_tgt_poll_group_000", 00:17:50.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:50.743 "listen_address": { 00:17:50.743 "trtype": "TCP", 00:17:50.743 "adrfam": "IPv4", 00:17:50.743 "traddr": "10.0.0.2", 00:17:50.743 "trsvcid": "4420" 00:17:50.743 }, 00:17:50.743 "peer_address": { 00:17:50.743 "trtype": "TCP", 00:17:50.743 "adrfam": "IPv4", 00:17:50.743 "traddr": "10.0.0.1", 00:17:50.743 "trsvcid": "45716" 00:17:50.743 }, 00:17:50.743 "auth": { 00:17:50.743 "state": "completed", 00:17:50.743 "digest": "sha256", 00:17:50.743 "dhgroup": "ffdhe8192" 00:17:50.743 } 00:17:50.743 } 00:17:50.743 ]' 00:17:50.743 
13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.743 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:50.743 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.743 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.743 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.743 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.743 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.743 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.003 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:17:51.003 13:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:51.945 13:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.945 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.516 00:17:52.516 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.516 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.516 13:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.776 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.776 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.776 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.776 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.776 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.776 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.776 { 00:17:52.776 "cntlid": 47, 00:17:52.776 "qid": 0, 00:17:52.776 "state": "enabled", 00:17:52.776 "thread": "nvmf_tgt_poll_group_000", 00:17:52.776 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:52.776 "listen_address": { 00:17:52.776 "trtype": "TCP", 00:17:52.776 "adrfam": "IPv4", 00:17:52.776 "traddr": "10.0.0.2", 00:17:52.776 "trsvcid": "4420" 00:17:52.776 }, 00:17:52.776 "peer_address": { 00:17:52.776 "trtype": "TCP", 00:17:52.776 "adrfam": "IPv4", 00:17:52.776 "traddr": "10.0.0.1", 00:17:52.776 "trsvcid": "45744" 00:17:52.776 }, 00:17:52.776 "auth": { 00:17:52.776 "state": "completed", 00:17:52.776 
"digest": "sha256", 00:17:52.776 "dhgroup": "ffdhe8192" 00:17:52.776 } 00:17:52.776 } 00:17:52.776 ]' 00:17:52.776 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.776 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:52.776 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.776 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:52.776 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.776 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.776 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.776 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.035 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:17:53.035 13:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:17:53.605 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.605 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:53.605 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.605 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:53.866 13:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:53.866 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.127 00:17:54.127 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.127 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.127 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.387 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.387 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.387 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.387 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.387 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.387 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.387 { 00:17:54.387 "cntlid": 49, 00:17:54.387 "qid": 0, 00:17:54.387 "state": "enabled", 00:17:54.387 "thread": "nvmf_tgt_poll_group_000", 00:17:54.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:54.387 "listen_address": { 00:17:54.387 "trtype": "TCP", 00:17:54.387 "adrfam": "IPv4", 
00:17:54.387 "traddr": "10.0.0.2", 00:17:54.387 "trsvcid": "4420" 00:17:54.387 }, 00:17:54.387 "peer_address": { 00:17:54.387 "trtype": "TCP", 00:17:54.387 "adrfam": "IPv4", 00:17:54.387 "traddr": "10.0.0.1", 00:17:54.387 "trsvcid": "45774" 00:17:54.387 }, 00:17:54.387 "auth": { 00:17:54.387 "state": "completed", 00:17:54.387 "digest": "sha384", 00:17:54.387 "dhgroup": "null" 00:17:54.387 } 00:17:54.387 } 00:17:54.387 ]' 00:17:54.387 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.387 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.387 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.387 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:54.388 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.388 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.388 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.388 13:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.648 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:17:54.648 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:17:55.591 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.591 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:55.591 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.591 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.591 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.591 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.591 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:55.591 13:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:55.591 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:55.591 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.591 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:55.591 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:55.591 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:55.591 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.591 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.591 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.591 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.591 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.591 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.591 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.591 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.853 00:17:55.853 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.853 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.853 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.115 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.115 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.115 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.115 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.115 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.115 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.115 { 00:17:56.115 "cntlid": 51, 00:17:56.115 "qid": 0, 00:17:56.115 "state": "enabled", 
00:17:56.115 "thread": "nvmf_tgt_poll_group_000", 00:17:56.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:56.115 "listen_address": { 00:17:56.115 "trtype": "TCP", 00:17:56.115 "adrfam": "IPv4", 00:17:56.115 "traddr": "10.0.0.2", 00:17:56.115 "trsvcid": "4420" 00:17:56.115 }, 00:17:56.115 "peer_address": { 00:17:56.115 "trtype": "TCP", 00:17:56.115 "adrfam": "IPv4", 00:17:56.115 "traddr": "10.0.0.1", 00:17:56.115 "trsvcid": "45802" 00:17:56.115 }, 00:17:56.115 "auth": { 00:17:56.115 "state": "completed", 00:17:56.115 "digest": "sha384", 00:17:56.115 "dhgroup": "null" 00:17:56.115 } 00:17:56.115 } 00:17:56.115 ]' 00:17:56.115 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.115 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.115 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.115 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:56.115 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.115 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.115 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.115 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.377 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:17:56.377 13:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:17:57.319 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.319 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:57.319 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.319 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.319 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.320 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.320 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:57.320 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:57.320 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:57.320 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.320 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:57.320 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:57.320 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:57.320 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.320 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.320 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.320 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.320 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.320 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.320 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.320 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.580 00:17:57.581 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.581 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.581 13:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.581 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.581 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.581 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.581 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.843 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.843 13:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.843 { 00:17:57.843 "cntlid": 53, 00:17:57.843 "qid": 0, 00:17:57.843 "state": "enabled", 00:17:57.843 "thread": "nvmf_tgt_poll_group_000", 00:17:57.843 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:57.843 "listen_address": { 00:17:57.843 "trtype": "TCP", 00:17:57.843 "adrfam": "IPv4", 00:17:57.843 "traddr": "10.0.0.2", 00:17:57.843 "trsvcid": "4420" 00:17:57.843 }, 00:17:57.843 "peer_address": { 00:17:57.843 "trtype": "TCP", 00:17:57.843 "adrfam": "IPv4", 00:17:57.843 "traddr": "10.0.0.1", 00:17:57.843 "trsvcid": "45830" 00:17:57.843 }, 00:17:57.843 "auth": { 00:17:57.843 "state": "completed", 00:17:57.843 "digest": "sha384", 00:17:57.843 "dhgroup": "null" 00:17:57.843 } 00:17:57.843 } 00:17:57.843 ]' 00:17:57.843 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.843 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.843 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.843 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:57.843 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.843 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.843 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.843 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.105 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:17:58.105 13:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:17:58.677 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.677 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:58.677 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.677 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.677 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.677 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:58.677 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:58.677 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:58.939 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:58.939 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.939 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:58.939 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:58.939 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:58.939 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.939 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:17:58.939 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.939 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.939 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.939 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:58.939 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:58.939 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:59.199 00:17:59.199 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.199 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.199 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.460 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.460 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.460 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.460 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.460 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.460 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.460 { 00:17:59.460 "cntlid": 55, 00:17:59.460 "qid": 0, 00:17:59.460 "state": "enabled", 00:17:59.460 "thread": "nvmf_tgt_poll_group_000", 00:17:59.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:17:59.460 "listen_address": { 00:17:59.460 "trtype": "TCP", 00:17:59.460 "adrfam": "IPv4", 00:17:59.460 "traddr": "10.0.0.2", 00:17:59.460 "trsvcid": "4420" 00:17:59.460 }, 00:17:59.460 "peer_address": { 00:17:59.460 "trtype": "TCP", 00:17:59.460 "adrfam": "IPv4", 00:17:59.460 "traddr": "10.0.0.1", 00:17:59.460 "trsvcid": "45846" 00:17:59.460 }, 00:17:59.460 "auth": { 00:17:59.460 "state": "completed", 00:17:59.460 "digest": "sha384", 00:17:59.460 "dhgroup": "null" 00:17:59.460 } 00:17:59.460 } 00:17:59.460 ]' 00:17:59.460 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.460 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.460 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.460 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:59.460 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.460 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.460 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.460 13:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.721 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:17:59.721 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:00.662 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.662 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:00.663 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.663 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.663 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.663 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.663 13:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.663 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:00.663 13:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:00.663 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:00.663 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.663 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:00.663 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:00.663 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:00.663 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.663 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.663 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.663 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.663 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.663 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.663 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.663 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.924 00:18:00.924 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.924 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.924 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.198 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.198 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.198 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:01.198 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.198 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.198 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.198 { 00:18:01.198 "cntlid": 57, 00:18:01.198 "qid": 0, 00:18:01.198 "state": "enabled", 00:18:01.198 "thread": "nvmf_tgt_poll_group_000", 00:18:01.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:01.198 "listen_address": { 00:18:01.199 "trtype": "TCP", 00:18:01.199 "adrfam": "IPv4", 00:18:01.199 "traddr": "10.0.0.2", 00:18:01.199 "trsvcid": "4420" 00:18:01.199 }, 00:18:01.199 "peer_address": { 00:18:01.199 "trtype": "TCP", 00:18:01.199 "adrfam": "IPv4", 00:18:01.199 "traddr": "10.0.0.1", 00:18:01.199 "trsvcid": "33678" 00:18:01.199 }, 00:18:01.199 "auth": { 00:18:01.199 "state": "completed", 00:18:01.199 "digest": "sha384", 00:18:01.199 "dhgroup": "ffdhe2048" 00:18:01.199 } 00:18:01.199 } 00:18:01.199 ]' 00:18:01.199 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.199 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.199 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.199 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.199 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.199 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.199 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.199 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.465 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:01.465 13:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:02.034 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.294 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.295 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.295 13:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.554 00:18:02.554 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.554 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.554 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.818 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.818 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.818 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.818 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.818 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.818 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.818 { 00:18:02.818 "cntlid": 59, 00:18:02.818 "qid": 0, 00:18:02.818 "state": "enabled", 00:18:02.818 "thread": "nvmf_tgt_poll_group_000", 00:18:02.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:02.818 "listen_address": { 00:18:02.818 "trtype": "TCP", 00:18:02.818 "adrfam": "IPv4", 00:18:02.818 "traddr": "10.0.0.2", 00:18:02.818 "trsvcid": "4420" 00:18:02.818 }, 00:18:02.818 "peer_address": { 00:18:02.818 "trtype": "TCP", 00:18:02.818 "adrfam": "IPv4", 00:18:02.818 "traddr": "10.0.0.1", 00:18:02.818 "trsvcid": "33704" 00:18:02.818 }, 00:18:02.818 "auth": { 00:18:02.818 "state": "completed", 00:18:02.818 "digest": "sha384", 00:18:02.818 "dhgroup": "ffdhe2048" 00:18:02.818 } 00:18:02.818 } 00:18:02.818 ]' 00:18:02.818 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.818 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.818 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.818 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:02.819 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.819 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.819 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.819 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.085 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:03.085 13:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.024 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.284 00:18:04.284 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.284 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.284 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.545 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.545 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.545 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.545 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.545 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.545 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.545 { 00:18:04.545 "cntlid": 61, 00:18:04.545 "qid": 0, 00:18:04.545 "state": "enabled", 00:18:04.545 "thread": "nvmf_tgt_poll_group_000", 00:18:04.545 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:04.545 "listen_address": { 00:18:04.545 "trtype": "TCP", 00:18:04.545 "adrfam": "IPv4", 00:18:04.545 "traddr": "10.0.0.2", 00:18:04.545 "trsvcid": "4420" 00:18:04.545 }, 00:18:04.545 "peer_address": { 00:18:04.545 "trtype": "TCP", 00:18:04.545 "adrfam": "IPv4", 00:18:04.545 "traddr": "10.0.0.1", 00:18:04.545 "trsvcid": "33734" 00:18:04.545 }, 00:18:04.545 "auth": { 00:18:04.545 "state": "completed", 00:18:04.545 "digest": "sha384", 00:18:04.545 "dhgroup": "ffdhe2048" 00:18:04.545 } 00:18:04.545 } 00:18:04.545 ]' 00:18:04.545 13:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.545 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.545 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.545 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.546 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.546 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.546 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.546 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.806 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:18:04.806 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:18:05.746 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.746 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:05.746 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.746 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.746 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.746 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.746 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:05.746 13:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:05.746 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:05.746 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.746 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:05.746 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:05.746 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:05.746 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.746 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:05.746 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.746 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.746 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.746 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:05.746 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:05.746 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.006 00:18:06.006 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.006 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.006 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.266 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.266 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.266 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.266 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.266 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.266 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.266 { 00:18:06.266 "cntlid": 63, 00:18:06.266 "qid": 0, 00:18:06.266 "state": "enabled", 00:18:06.266 "thread": "nvmf_tgt_poll_group_000", 00:18:06.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:06.266 "listen_address": { 00:18:06.266 "trtype": "TCP", 00:18:06.266 "adrfam": "IPv4", 00:18:06.266 "traddr": "10.0.0.2", 00:18:06.266 "trsvcid": "4420" 00:18:06.266 }, 00:18:06.266 "peer_address": { 00:18:06.266 "trtype": "TCP", 00:18:06.266 "adrfam": "IPv4", 00:18:06.266 "traddr": "10.0.0.1", 00:18:06.266 "trsvcid": "33770" 00:18:06.266 }, 00:18:06.266 "auth": { 00:18:06.266 "state": "completed", 00:18:06.266 "digest": "sha384", 00:18:06.266 "dhgroup": "ffdhe2048" 00:18:06.266 } 00:18:06.266 } 00:18:06.266 ]' 00:18:06.266 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.266 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.266 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.266 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:06.266 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.266 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.266 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.266 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.526 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:06.526 13:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:07.468 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:18:07.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.468 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:07.468 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.468 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.468 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.468 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.468 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.469 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:07.469 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:07.469 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:07.469 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.469 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:07.469 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:07.469 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:07.469 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.469 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.469 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.469 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.469 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.469 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.469 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.469 13:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.729 
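Before tearing the controller down, each iteration proves the session really negotiated the parameters under test. The jq probes against bdev_nvme_get_controllers and nvmf_subsystem_get_qpairs in the trace boil down to the following checks (rpc as in the sketch above; the expected values shown are for the sha384/ffdhe3072 case running here):

    # Confirm the authenticated controller exists on the host side.
    [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers \
          | jq -r '.[].name') == nvme0 ]]

    # Confirm the qpair carries the digest/dhgroup under test and that the
    # DH-HMAC-CHAP exchange reached the "completed" state.
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
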
00:18:07.729 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.729 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.729 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.989 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.989 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.989 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.989 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.989 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.989 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.989 { 00:18:07.989 "cntlid": 65, 00:18:07.989 "qid": 0, 00:18:07.989 "state": "enabled", 00:18:07.989 "thread": "nvmf_tgt_poll_group_000", 00:18:07.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:07.989 "listen_address": { 00:18:07.989 "trtype": "TCP", 00:18:07.989 "adrfam": "IPv4", 00:18:07.989 "traddr": "10.0.0.2", 00:18:07.989 "trsvcid": "4420" 00:18:07.989 }, 00:18:07.989 "peer_address": { 00:18:07.989 "trtype": "TCP", 00:18:07.989 "adrfam": "IPv4", 00:18:07.989 "traddr": "10.0.0.1", 00:18:07.989 "trsvcid": "33796" 00:18:07.989 }, 00:18:07.989 "auth": { 00:18:07.989 "state": "completed", 00:18:07.989 "digest": "sha384", 00:18:07.989 "dhgroup": "ffdhe3072" 00:18:07.989 } 00:18:07.989 } 00:18:07.989 ]' 00:18:07.989 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.989 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.989 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.989 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.989 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.989 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.989 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.989 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.249 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:08.249 13:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:08.820 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.821 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.821 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.821 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.821 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.821 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.821 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:08.821 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:09.081 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:09.081 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.081 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:09.081 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:09.081 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:09.081 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.081 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.081 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.081 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.081 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.081 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.081 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.081 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.342 00:18:09.342 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.342 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.342 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.603 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.603 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.603 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.603 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.603 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.603 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.603 { 00:18:09.603 "cntlid": 67, 00:18:09.603 "qid": 0, 00:18:09.603 "state": "enabled", 00:18:09.603 "thread": "nvmf_tgt_poll_group_000", 00:18:09.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:09.603 "listen_address": { 00:18:09.603 "trtype": "TCP", 00:18:09.603 "adrfam": "IPv4", 00:18:09.603 "traddr": "10.0.0.2", 00:18:09.603 "trsvcid": "4420" 00:18:09.603 }, 00:18:09.603 "peer_address": { 00:18:09.603 "trtype": "TCP", 00:18:09.603 "adrfam": "IPv4", 00:18:09.603 "traddr": "10.0.0.1", 00:18:09.603 "trsvcid": "43068" 00:18:09.603 }, 00:18:09.603 "auth": { 00:18:09.603 "state": "completed", 00:18:09.603 "digest": "sha384", 00:18:09.603 "dhgroup": "ffdhe3072" 00:18:09.603 } 00:18:09.603 } 00:18:09.603 ]' 00:18:09.603 13:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.603 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.603 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.603 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:09.603 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.603 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.603 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.603 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.863 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret 
DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:09.864 13:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.806 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.807 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.807 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.067 00:18:11.067 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.067 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.067 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.328 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.328 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.328 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.328 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.328 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.328 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.328 { 00:18:11.328 "cntlid": 69, 00:18:11.328 "qid": 0, 00:18:11.328 "state": "enabled", 00:18:11.328 "thread": "nvmf_tgt_poll_group_000", 00:18:11.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:11.328 "listen_address": { 00:18:11.328 "trtype": "TCP", 00:18:11.328 "adrfam": "IPv4", 00:18:11.328 "traddr": "10.0.0.2", 00:18:11.328 "trsvcid": "4420" 00:18:11.328 }, 00:18:11.328 "peer_address": { 00:18:11.328 "trtype": "TCP", 00:18:11.328 "adrfam": "IPv4", 00:18:11.328 "traddr": "10.0.0.1", 00:18:11.328 "trsvcid": "43086" 00:18:11.328 }, 00:18:11.328 "auth": { 00:18:11.328 "state": "completed", 00:18:11.328 "digest": "sha384", 00:18:11.328 "dhgroup": "ffdhe3072" 00:18:11.328 } 00:18:11.328 } 00:18:11.328 ]' 00:18:11.328 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.328 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.328 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.328 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.328 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.328 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.328 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.328 13:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:11.588 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:18:11.588 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
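The trace above is one pass of the per-key authentication loop: for sha384/ffdhe3072 and key3, the host-side options are narrowed to a single digest and DH group, the host NQN is re-admitted to the subsystem with the key under test, and a controller attach drives the in-band DH-HMAC-CHAP handshake. A minimal sketch of that sequence, reconstructed from the RPC calls visible in the trace — here rpc.py abbreviates the full scripts/rpc.py path shown in the log, rpc_cmd is the autotest helper that invokes it against the target's default socket, and the digest/dhgroup/keyid assignments are illustrative:

# One pass of the auth loop traced above (variable values illustrative).
digest=sha384 dhgroup=ffdhe3072 keyid=3
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
subnqn=nqn.2024-03.io.spdk:cnode0
# Narrow the host-side initiator to the digest and DH group under test.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# Admit the host NQN with the key under test; key3 carries no controller
# (bidirectional) key in this run, so --dhchap-ctrlr-key is omitted for it.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid"
# Attach a controller from the host side; this performs the AUTH handshake.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid"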
00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.532 13:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.791 00:18:12.791 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.791 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.791 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.051 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.051 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.051 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.051 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.051 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.051 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.051 { 00:18:13.051 "cntlid": 71, 00:18:13.051 "qid": 0, 00:18:13.051 "state": "enabled", 00:18:13.051 "thread": "nvmf_tgt_poll_group_000", 00:18:13.051 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:13.051 "listen_address": { 00:18:13.051 "trtype": "TCP", 00:18:13.051 "adrfam": "IPv4", 00:18:13.051 "traddr": "10.0.0.2", 00:18:13.051 "trsvcid": "4420" 00:18:13.051 }, 00:18:13.051 "peer_address": { 00:18:13.051 "trtype": "TCP", 00:18:13.051 "adrfam": "IPv4", 00:18:13.051 "traddr": "10.0.0.1", 00:18:13.051 "trsvcid": "43116" 00:18:13.051 }, 00:18:13.051 "auth": { 00:18:13.051 "state": "completed", 00:18:13.051 "digest": "sha384", 00:18:13.051 "dhgroup": "ffdhe3072" 00:18:13.051 } 00:18:13.051 } 00:18:13.051 ]' 00:18:13.051 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.051 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.051 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.051 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.051 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.051 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.051 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.051 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.310 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:13.310 13:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
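Each successful attach is then verified on both sides before tear-down: the host must report a controller named nvme0, and the target's first queue pair must show the negotiated digest and DH group with auth state "completed". A sketch of those checks, taken from the jq probes visible in the trace (the qpairs variable is illustrative):

# Verification that follows each attach in the trace.
rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'  # expect nvme0
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
jq -r '.[0].auth.digest'  <<<"$qpairs"   # expect the digest under test, e.g. sha384
jq -r '.[0].auth.dhgroup' <<<"$qpairs"   # expect the DH group under test, e.g. ffdhe4096
jq -r '.[0].auth.state'   <<<"$qpairs"   # expect "completed"
# Detach before the next key/dhgroup combination.
rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0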
00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.249 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.508 00:18:14.508 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.508 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.508 13:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.767 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.767 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.767 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.767 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.767 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.767 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.767 { 00:18:14.767 "cntlid": 73, 00:18:14.767 "qid": 0, 00:18:14.767 "state": "enabled", 00:18:14.767 "thread": "nvmf_tgt_poll_group_000", 00:18:14.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:14.767 "listen_address": { 00:18:14.767 "trtype": "TCP", 00:18:14.767 "adrfam": "IPv4", 00:18:14.767 "traddr": "10.0.0.2", 00:18:14.767 "trsvcid": "4420" 00:18:14.767 }, 00:18:14.767 "peer_address": { 00:18:14.767 "trtype": "TCP", 00:18:14.767 "adrfam": "IPv4", 00:18:14.767 "traddr": "10.0.0.1", 00:18:14.767 "trsvcid": "43144" 00:18:14.767 }, 00:18:14.767 "auth": { 00:18:14.767 "state": "completed", 00:18:14.767 "digest": "sha384", 00:18:14.767 "dhgroup": "ffdhe4096" 00:18:14.767 } 00:18:14.767 } 00:18:14.767 ]' 00:18:14.767 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.767 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.767 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.768 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:14.768 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.768 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.768 
13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.768 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.028 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:15.028 13:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.971 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.231 00:18:16.231 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.232 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.232 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.492 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.492 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.492 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.492 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.492 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.492 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.492 { 00:18:16.492 "cntlid": 75, 00:18:16.492 "qid": 0, 00:18:16.492 "state": "enabled", 00:18:16.492 "thread": "nvmf_tgt_poll_group_000", 00:18:16.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:16.492 "listen_address": { 00:18:16.492 "trtype": "TCP", 00:18:16.492 "adrfam": "IPv4", 00:18:16.492 "traddr": "10.0.0.2", 00:18:16.492 "trsvcid": "4420" 00:18:16.492 }, 00:18:16.492 "peer_address": { 00:18:16.492 "trtype": "TCP", 00:18:16.492 "adrfam": "IPv4", 00:18:16.492 "traddr": "10.0.0.1", 00:18:16.492 "trsvcid": "43168" 00:18:16.492 }, 00:18:16.492 "auth": { 00:18:16.492 "state": "completed", 00:18:16.492 "digest": "sha384", 00:18:16.492 "dhgroup": "ffdhe4096" 00:18:16.492 } 00:18:16.492 } 00:18:16.492 ]' 00:18:16.492 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.492 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:16.492 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.492 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:18:16.492 13:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.492 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.492 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.492 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.753 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:16.753 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:17.700 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.700 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:17.700 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.700 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.700 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.701 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.701 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:17.701 13:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:17.701 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:17.701 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.701 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:17.701 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:17.701 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:17.701 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.701 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.701 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.701 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.701 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.701 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.701 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.701 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.962 00:18:17.962 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.962 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.962 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.223 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.223 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.223 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.223 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.223 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.223 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.223 { 00:18:18.223 "cntlid": 77, 00:18:18.223 "qid": 0, 00:18:18.223 "state": "enabled", 00:18:18.223 "thread": "nvmf_tgt_poll_group_000", 00:18:18.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:18.223 "listen_address": { 00:18:18.223 "trtype": "TCP", 00:18:18.223 "adrfam": "IPv4", 00:18:18.223 "traddr": "10.0.0.2", 00:18:18.223 "trsvcid": "4420" 00:18:18.223 }, 00:18:18.223 "peer_address": { 00:18:18.223 "trtype": "TCP", 00:18:18.223 "adrfam": "IPv4", 00:18:18.223 "traddr": "10.0.0.1", 00:18:18.223 "trsvcid": "43194" 00:18:18.223 }, 00:18:18.223 "auth": { 00:18:18.223 "state": "completed", 00:18:18.223 "digest": "sha384", 00:18:18.223 "dhgroup": "ffdhe4096" 00:18:18.223 } 00:18:18.223 } 00:18:18.223 ]' 00:18:18.223 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.223 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.223 13:22:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.223 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:18.223 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.223 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.223 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.223 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.485 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:18:18.485 13:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.428 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:19.429 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.429 13:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.689 00:18:19.689 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.689 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.689 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.950 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.950 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.950 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.950 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.950 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.950 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.950 { 00:18:19.950 "cntlid": 79, 00:18:19.950 "qid": 0, 00:18:19.950 "state": "enabled", 00:18:19.950 "thread": "nvmf_tgt_poll_group_000", 00:18:19.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:19.950 "listen_address": { 00:18:19.950 "trtype": "TCP", 00:18:19.950 "adrfam": "IPv4", 00:18:19.950 "traddr": "10.0.0.2", 00:18:19.950 "trsvcid": "4420" 00:18:19.950 }, 00:18:19.950 "peer_address": { 00:18:19.950 "trtype": "TCP", 00:18:19.950 "adrfam": "IPv4", 00:18:19.950 "traddr": "10.0.0.1", 00:18:19.950 "trsvcid": "36180" 00:18:19.950 }, 00:18:19.950 "auth": { 00:18:19.950 "state": "completed", 00:18:19.950 "digest": "sha384", 00:18:19.950 "dhgroup": "ffdhe4096" 00:18:19.950 } 00:18:19.950 } 00:18:19.950 ]' 00:18:19.950 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.950 13:22:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.950 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.950 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:19.950 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.950 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.950 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.950 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.210 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:20.210 13:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:21.152 13:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.152 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.413 00:18:21.674 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.674 13:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.674 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.674 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.674 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.674 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.674 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.674 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.674 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.674 { 00:18:21.674 "cntlid": 81, 00:18:21.674 "qid": 0, 00:18:21.674 "state": "enabled", 00:18:21.674 "thread": "nvmf_tgt_poll_group_000", 00:18:21.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:21.674 "listen_address": { 00:18:21.674 "trtype": "TCP", 00:18:21.674 "adrfam": "IPv4", 00:18:21.674 "traddr": "10.0.0.2", 00:18:21.674 "trsvcid": "4420" 00:18:21.674 }, 00:18:21.674 "peer_address": { 00:18:21.674 "trtype": "TCP", 00:18:21.674 "adrfam": "IPv4", 00:18:21.674 "traddr": "10.0.0.1", 00:18:21.674 "trsvcid": "36218" 00:18:21.674 }, 00:18:21.674 "auth": { 00:18:21.674 "state": "completed", 00:18:21.674 "digest": 
"sha384", 00:18:21.674 "dhgroup": "ffdhe6144" 00:18:21.674 } 00:18:21.674 } 00:18:21.674 ]' 00:18:21.674 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.674 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.674 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.934 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.934 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.934 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.934 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.934 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.934 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:21.934 13:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:22.875 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.875 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:22.875 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.875 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.875 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.875 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.875 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:22.875 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:23.134 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:23.134 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.134 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:23.134 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:23.134 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:23.134 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.135 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.135 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.135 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.135 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.135 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.135 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.135 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.394 00:18:23.394 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.394 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.394 13:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.654 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.654 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.654 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.654 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.654 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.654 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.654 { 00:18:23.654 "cntlid": 83, 00:18:23.654 "qid": 0, 00:18:23.654 "state": "enabled", 00:18:23.654 "thread": "nvmf_tgt_poll_group_000", 00:18:23.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:23.654 "listen_address": { 00:18:23.654 "trtype": "TCP", 00:18:23.654 "adrfam": "IPv4", 00:18:23.654 "traddr": "10.0.0.2", 00:18:23.654 
"trsvcid": "4420" 00:18:23.654 }, 00:18:23.654 "peer_address": { 00:18:23.654 "trtype": "TCP", 00:18:23.654 "adrfam": "IPv4", 00:18:23.654 "traddr": "10.0.0.1", 00:18:23.654 "trsvcid": "36236" 00:18:23.654 }, 00:18:23.654 "auth": { 00:18:23.654 "state": "completed", 00:18:23.654 "digest": "sha384", 00:18:23.654 "dhgroup": "ffdhe6144" 00:18:23.654 } 00:18:23.654 } 00:18:23.654 ]' 00:18:23.654 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.654 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.654 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.654 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:23.654 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.654 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.654 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.654 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.915 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:23.915 13:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:24.925 
13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.925 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.230 00:18:25.230 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.230 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.230 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.522 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.522 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.522 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.522 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.522 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.522 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.522 { 00:18:25.522 "cntlid": 85, 00:18:25.522 "qid": 0, 00:18:25.522 "state": "enabled", 00:18:25.522 "thread": "nvmf_tgt_poll_group_000", 00:18:25.522 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:25.522 "listen_address": { 00:18:25.522 "trtype": "TCP", 00:18:25.522 "adrfam": "IPv4", 00:18:25.522 "traddr": "10.0.0.2", 00:18:25.522 "trsvcid": "4420" 00:18:25.522 }, 00:18:25.522 "peer_address": { 00:18:25.522 "trtype": "TCP", 00:18:25.522 "adrfam": "IPv4", 00:18:25.522 "traddr": "10.0.0.1", 00:18:25.522 "trsvcid": "36276" 00:18:25.522 }, 00:18:25.522 "auth": { 00:18:25.522 "state": "completed", 00:18:25.522 "digest": "sha384", 00:18:25.522 "dhgroup": "ffdhe6144" 00:18:25.522 } 00:18:25.522 } 00:18:25.522 ]' 00:18:25.522 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.522 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.522 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.522 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:25.522 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.522 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.522 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.522 13:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.783 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:18:25.783 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:18:26.354 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.615 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:26.615 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.615 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.615 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.615 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.615 13:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:26.615 13:22:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:26.615 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:26.615 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.615 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:26.615 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:26.615 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:26.615 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.615 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:26.615 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.615 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.615 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.615 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:26.615 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.615 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.185 00:18:27.185 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.185 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.185 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.185 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.185 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.185 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.185 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.185 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.185 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.185 { 00:18:27.185 "cntlid": 87, 
00:18:27.185 "qid": 0, 00:18:27.185 "state": "enabled", 00:18:27.185 "thread": "nvmf_tgt_poll_group_000", 00:18:27.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:27.185 "listen_address": { 00:18:27.185 "trtype": "TCP", 00:18:27.185 "adrfam": "IPv4", 00:18:27.185 "traddr": "10.0.0.2", 00:18:27.185 "trsvcid": "4420" 00:18:27.185 }, 00:18:27.185 "peer_address": { 00:18:27.185 "trtype": "TCP", 00:18:27.185 "adrfam": "IPv4", 00:18:27.185 "traddr": "10.0.0.1", 00:18:27.185 "trsvcid": "36304" 00:18:27.185 }, 00:18:27.185 "auth": { 00:18:27.185 "state": "completed", 00:18:27.185 "digest": "sha384", 00:18:27.185 "dhgroup": "ffdhe6144" 00:18:27.185 } 00:18:27.185 } 00:18:27.185 ]' 00:18:27.185 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.185 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.185 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.446 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:27.446 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.446 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.446 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.446 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.446 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:27.446 13:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:28.386 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.387 13:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.957 00:18:28.957 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.957 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.957 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.218 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.218 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.218 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.218 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.218 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.218 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.218 { 00:18:29.218 "cntlid": 89, 00:18:29.218 "qid": 0, 00:18:29.218 "state": "enabled", 00:18:29.218 "thread": "nvmf_tgt_poll_group_000", 00:18:29.218 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:29.218 "listen_address": { 00:18:29.218 "trtype": "TCP", 00:18:29.218 "adrfam": "IPv4", 00:18:29.218 "traddr": "10.0.0.2", 00:18:29.218 "trsvcid": "4420" 00:18:29.218 }, 00:18:29.218 "peer_address": { 00:18:29.218 "trtype": "TCP", 00:18:29.218 "adrfam": "IPv4", 00:18:29.218 "traddr": "10.0.0.1", 00:18:29.218 "trsvcid": "36322" 00:18:29.218 }, 00:18:29.218 "auth": { 00:18:29.218 "state": "completed", 00:18:29.218 "digest": "sha384", 00:18:29.218 "dhgroup": "ffdhe8192" 00:18:29.218 } 00:18:29.218 } 00:18:29.218 ]' 00:18:29.218 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.218 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.218 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.218 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:29.218 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.479 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.479 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.479 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.479 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:29.479 13:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.419 13:22:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.419 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.420 13:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.989 00:18:30.989 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.989 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.989 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.250 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.250 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:31.250 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.250 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.250 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.250 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.250 { 00:18:31.250 "cntlid": 91, 00:18:31.250 "qid": 0, 00:18:31.250 "state": "enabled", 00:18:31.250 "thread": "nvmf_tgt_poll_group_000", 00:18:31.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:31.250 "listen_address": { 00:18:31.250 "trtype": "TCP", 00:18:31.250 "adrfam": "IPv4", 00:18:31.250 "traddr": "10.0.0.2", 00:18:31.250 "trsvcid": "4420" 00:18:31.250 }, 00:18:31.250 "peer_address": { 00:18:31.250 "trtype": "TCP", 00:18:31.250 "adrfam": "IPv4", 00:18:31.250 "traddr": "10.0.0.1", 00:18:31.250 "trsvcid": "56726" 00:18:31.250 }, 00:18:31.250 "auth": { 00:18:31.250 "state": "completed", 00:18:31.250 "digest": "sha384", 00:18:31.250 "dhgroup": "ffdhe8192" 00:18:31.250 } 00:18:31.250 } 00:18:31.250 ]' 00:18:31.250 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.250 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.250 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.250 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:31.250 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.250 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.250 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.250 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.510 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:31.510 13:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:32.450 13:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.450 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.451 13:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.019 00:18:33.019 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.019 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.019 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.279 13:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.279 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.279 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.279 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.279 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.279 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.279 { 00:18:33.279 "cntlid": 93, 00:18:33.279 "qid": 0, 00:18:33.279 "state": "enabled", 00:18:33.279 "thread": "nvmf_tgt_poll_group_000", 00:18:33.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:33.279 "listen_address": { 00:18:33.279 "trtype": "TCP", 00:18:33.279 "adrfam": "IPv4", 00:18:33.279 "traddr": "10.0.0.2", 00:18:33.279 "trsvcid": "4420" 00:18:33.279 }, 00:18:33.279 "peer_address": { 00:18:33.279 "trtype": "TCP", 00:18:33.279 "adrfam": "IPv4", 00:18:33.279 "traddr": "10.0.0.1", 00:18:33.279 "trsvcid": "56744" 00:18:33.279 }, 00:18:33.279 "auth": { 00:18:33.279 "state": "completed", 00:18:33.279 "digest": "sha384", 00:18:33.279 "dhgroup": "ffdhe8192" 00:18:33.279 } 00:18:33.279 } 00:18:33.279 ]' 00:18:33.279 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.279 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.279 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.279 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:33.279 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.279 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.279 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.279 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.539 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:18:33.539 13:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:18:34.110 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.372 13:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:34.372 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.372 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.372 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.372 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.372 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:34.372 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:34.372 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:34.372 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.372 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:34.372 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:34.372 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:34.373 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.373 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:34.373 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.373 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.373 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.373 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:34.373 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.373 13:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.945 00:18:34.945 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.945 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.945 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.205 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.205 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.205 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.205 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.205 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.205 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.205 { 00:18:35.205 "cntlid": 95, 00:18:35.205 "qid": 0, 00:18:35.205 "state": "enabled", 00:18:35.205 "thread": "nvmf_tgt_poll_group_000", 00:18:35.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:35.205 "listen_address": { 00:18:35.205 "trtype": "TCP", 00:18:35.205 "adrfam": "IPv4", 00:18:35.205 "traddr": "10.0.0.2", 00:18:35.205 "trsvcid": "4420" 00:18:35.205 }, 00:18:35.205 "peer_address": { 00:18:35.205 "trtype": "TCP", 00:18:35.205 "adrfam": "IPv4", 00:18:35.205 "traddr": "10.0.0.1", 00:18:35.205 "trsvcid": "56762" 00:18:35.205 }, 00:18:35.205 "auth": { 00:18:35.205 "state": "completed", 00:18:35.205 "digest": "sha384", 00:18:35.205 "dhgroup": "ffdhe8192" 00:18:35.205 } 00:18:35.205 } 00:18:35.205 ]' 00:18:35.205 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.205 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:35.205 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.205 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:35.205 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.205 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.205 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.205 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.466 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:35.466 13:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.410 13:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.410 13:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.683 00:18:36.683 
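The run above repeats one verification cycle per digest/dhgroup/key combination. Condensed into a standalone sketch (the rpc.py path, sockets, NQNs, and addresses are copied from this run; the fixed loop bounds and the unconditional controller key are simplifications, since the run omits the controller key for key3):

    #!/usr/bin/env bash
    # Sketch of one verification cycle from this log, not the literal
    # target/auth.sh source. Values below are taken from the run above.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOST_SOCK=/var/tmp/host.sock
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

    for keyid in 0 1 2 3; do
        # Host side: restrict the initiator to the digest/dhgroup under test.
        "$RPC" -s "$HOST_SOCK" bdev_nvme_set_options \
            --dhchap-digests sha512 --dhchap-dhgroups null
        # Target side: admit the host with the matching key pair.
        "$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # DH-CHAP itself runs during controller attach.
        "$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -t tcp -f ipv4 \
            -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
            --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
        # The qpair dump must report the negotiated digest/dhgroup and a
        # completed auth state; this is what the jq checks in the log assert.
        "$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" |
            jq -e '.[0].auth.state == "completed"'
        "$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
        "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
    done

Note that the attach step is where authentication actually executes; the jq check on nvmf_subsystem_get_qpairs only confirms after the fact that the target recorded the digest, dhgroup, and completed state for the new queue pair.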
13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.683 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.684 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.943 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.943 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.943 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.943 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.943 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.943 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.943 { 00:18:36.943 "cntlid": 97, 00:18:36.943 "qid": 0, 00:18:36.943 "state": "enabled", 00:18:36.943 "thread": "nvmf_tgt_poll_group_000", 00:18:36.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:36.943 "listen_address": { 00:18:36.943 "trtype": "TCP", 00:18:36.943 "adrfam": "IPv4", 00:18:36.943 "traddr": "10.0.0.2", 00:18:36.943 "trsvcid": "4420" 00:18:36.943 }, 00:18:36.943 "peer_address": { 00:18:36.943 "trtype": "TCP", 00:18:36.943 "adrfam": "IPv4", 00:18:36.943 "traddr": "10.0.0.1", 00:18:36.943 "trsvcid": "56798" 00:18:36.943 }, 00:18:36.943 "auth": { 00:18:36.943 "state": "completed", 00:18:36.943 "digest": "sha512", 00:18:36.943 "dhgroup": "null" 00:18:36.943 } 00:18:36.943 } 00:18:36.943 ]' 00:18:36.943 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.943 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.943 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.943 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:36.943 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.943 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.943 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.943 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.202 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:37.202 13:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 
00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.142 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.143 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.143 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.403 00:18:38.403 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.403 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.403 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.664 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.664 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.664 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.665 13:23:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.665 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.665 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.665 { 00:18:38.665 "cntlid": 99, 00:18:38.665 "qid": 0, 00:18:38.665 "state": "enabled", 00:18:38.665 "thread": "nvmf_tgt_poll_group_000", 00:18:38.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:38.665 "listen_address": { 00:18:38.665 "trtype": "TCP", 00:18:38.665 "adrfam": "IPv4", 00:18:38.665 "traddr": "10.0.0.2", 00:18:38.665 "trsvcid": "4420" 00:18:38.665 }, 00:18:38.665 "peer_address": { 00:18:38.665 "trtype": "TCP", 00:18:38.665 "adrfam": "IPv4", 00:18:38.665 "traddr": "10.0.0.1", 00:18:38.665 "trsvcid": "56810" 00:18:38.665 }, 00:18:38.665 "auth": { 00:18:38.665 "state": "completed", 00:18:38.665 "digest": "sha512", 00:18:38.665 "dhgroup": "null" 00:18:38.665 } 00:18:38.665 } 00:18:38.665 ]' 00:18:38.665 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.665 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:38.665 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.665 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:38.665 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.665 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.665 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.665 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.926 13:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:38.926 13:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:39.866 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.126 00:18:40.126 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.126 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.126 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.385 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.385 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.385 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.385 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.385 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.385 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.385 { 00:18:40.385 "cntlid": 101, 00:18:40.385 "qid": 0, 00:18:40.385 "state": "enabled", 00:18:40.385 "thread": "nvmf_tgt_poll_group_000", 00:18:40.385 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:40.385 "listen_address": { 00:18:40.385 "trtype": "TCP", 00:18:40.385 "adrfam": "IPv4", 00:18:40.385 "traddr": "10.0.0.2", 00:18:40.385 "trsvcid": "4420" 00:18:40.385 }, 00:18:40.385 "peer_address": { 00:18:40.385 "trtype": "TCP", 00:18:40.385 "adrfam": "IPv4", 00:18:40.385 "traddr": "10.0.0.1", 00:18:40.385 "trsvcid": "36988" 00:18:40.385 }, 00:18:40.385 "auth": { 00:18:40.385 "state": "completed", 00:18:40.385 "digest": "sha512", 00:18:40.385 "dhgroup": "null" 00:18:40.385 } 00:18:40.385 } 00:18:40.385 ]' 00:18:40.385 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.385 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.385 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.385 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:40.385 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.385 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.385 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.385 13:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.645 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:18:40.645 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:18:41.584 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.585 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:41.585 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.585 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.585 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.585 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.585 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:41.585 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:41.585 13:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:41.585 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.585 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:41.585 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:41.585 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:41.585 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.585 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:41.585 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.585 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.585 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.585 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:41.585 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.585 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:41.845 00:18:41.845 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.845 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.845 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.845 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.845 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.845 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.845 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.106 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.106 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.106 { 00:18:42.106 "cntlid": 103, 00:18:42.106 "qid": 0, 00:18:42.106 "state": "enabled", 00:18:42.106 "thread": "nvmf_tgt_poll_group_000", 00:18:42.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:42.106 "listen_address": { 00:18:42.106 "trtype": "TCP", 00:18:42.106 "adrfam": "IPv4", 00:18:42.106 "traddr": "10.0.0.2", 00:18:42.106 "trsvcid": "4420" 00:18:42.106 }, 00:18:42.106 "peer_address": { 00:18:42.106 "trtype": "TCP", 00:18:42.106 "adrfam": "IPv4", 00:18:42.106 "traddr": "10.0.0.1", 00:18:42.106 "trsvcid": "37010" 00:18:42.106 }, 00:18:42.106 "auth": { 00:18:42.106 "state": "completed", 00:18:42.106 "digest": "sha512", 00:18:42.106 "dhgroup": "null" 00:18:42.106 } 00:18:42.106 } 00:18:42.106 ]' 00:18:42.106 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.106 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.106 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.106 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:42.106 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.106 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.106 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.106 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.366 13:23:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:42.366 13:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:42.934 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.934 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:42.934 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.934 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.935 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.935 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:42.935 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.935 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:42.935 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:43.193 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:43.193 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.193 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:43.193 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:43.193 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:43.193 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.193 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.193 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.193 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.193 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.193 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
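
[annotation] Each iteration of the loop at target/auth.sh@119-123 exercises one digest/dhgroup/key combination end to end against two SPDK apps: a host app answering on /var/tmp/host.sock (the "hostrpc" calls) and the nvmf target (the "rpc_cmd" calls). Distilled from the xtrace above, one pass looks roughly like the sketch below. This is a sketch, not the verbatim auth.sh: the target's RPC socket is assumed to be the SPDK default (only the host socket appears explicitly in this excerpt), and key0/ckey0 are keyring entries registered earlier in the run, outside this excerpt.

    # One pass of the combination loop, reconstructed from the trace (bash sketch).
    hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # host-side SPDK app
    tgtrpc()  { scripts/rpc.py "$@"; }                        # target app, assumed default socket
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

    # 1. Pin the host to the digest/dhgroup pair under test.
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
    # 2. Authorize the host on the subsystem with a host key and (optionally) a controller key.
    tgtrpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # 3. Attach; DH-HMAC-CHAP runs while the queue is being established.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # 4. Verify the controller came up, then inspect the negotiated auth parameters.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    tgtrpc nvmf_subsystem_get_qpairs "$SUBNQN"   # .auth fields checked per pass
    # 5. Tear down before the next key/dhgroup pair.
    hostrpc bdev_nvme_detach_controller nvme0
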
00:18:43.193 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.193 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.452 00:18:43.452 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.452 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.452 13:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.711 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.711 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.711 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.711 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.711 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.711 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.712 { 00:18:43.712 "cntlid": 105, 00:18:43.712 "qid": 0, 00:18:43.712 "state": "enabled", 00:18:43.712 "thread": "nvmf_tgt_poll_group_000", 00:18:43.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:43.712 "listen_address": { 00:18:43.712 "trtype": "TCP", 00:18:43.712 "adrfam": "IPv4", 00:18:43.712 "traddr": "10.0.0.2", 00:18:43.712 "trsvcid": "4420" 00:18:43.712 }, 00:18:43.712 "peer_address": { 00:18:43.712 "trtype": "TCP", 00:18:43.712 "adrfam": "IPv4", 00:18:43.712 "traddr": "10.0.0.1", 00:18:43.712 "trsvcid": "37030" 00:18:43.712 }, 00:18:43.712 "auth": { 00:18:43.712 "state": "completed", 00:18:43.712 "digest": "sha512", 00:18:43.712 "dhgroup": "ffdhe2048" 00:18:43.712 } 00:18:43.712 } 00:18:43.712 ]' 00:18:43.712 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.712 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.712 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.712 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:43.712 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.712 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.712 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.712 13:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.971 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:43.971 13:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.909 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.169 00:18:45.169 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.169 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.169 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.429 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.429 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.429 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.429 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.429 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.429 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.429 { 00:18:45.429 "cntlid": 107, 00:18:45.429 "qid": 0, 00:18:45.429 "state": "enabled", 00:18:45.429 "thread": "nvmf_tgt_poll_group_000", 00:18:45.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:45.429 "listen_address": { 00:18:45.429 "trtype": "TCP", 00:18:45.429 "adrfam": "IPv4", 00:18:45.429 "traddr": "10.0.0.2", 00:18:45.429 "trsvcid": "4420" 00:18:45.429 }, 00:18:45.429 "peer_address": { 00:18:45.429 "trtype": "TCP", 00:18:45.429 "adrfam": "IPv4", 00:18:45.429 "traddr": "10.0.0.1", 00:18:45.429 "trsvcid": "37074" 00:18:45.429 }, 00:18:45.429 "auth": { 00:18:45.429 "state": "completed", 00:18:45.429 "digest": "sha512", 00:18:45.429 "dhgroup": "ffdhe2048" 00:18:45.429 } 00:18:45.429 } 00:18:45.429 ]' 00:18:45.429 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.429 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.429 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.429 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:45.429 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:45.429 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.429 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.429 13:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.691 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:45.691 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:46.262 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.522 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:46.522 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.522 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.522 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.522 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.522 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:46.522 13:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:46.522 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:46.522 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:46.522 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:46.522 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:46.522 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:46.522 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.522 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
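
[annotation] Note the asymmetry across key slots in this trace: the key0/key1/key2 passes carry both --dhchap-key and --dhchap-ctrlr-key (bidirectional authentication), while the key3 passes omit the controller key entirely, in both nvmf_subsystem_add_host and the attach. The trace's `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` line is bash's :+ expansion doing that switching. A sketch of the pattern, with illustrative array contents standing in for the keyring names generated earlier in the run:

    # Optional controller key threading, as seen in the trace (illustrative sketch).
    keys=(key0 key1 key2 key3)
    ckeys=(ckey0 ckey1 ckey2 "")    # slot 3 deliberately has no controller key
    for keyid in "${!keys[@]}"; do
        # :+ yields the extra arguments only when a controller key exists,
        # so the key3 pass runs host-only (unidirectional) authentication.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
        tgtrpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid" "${ckey[@]}"
        # ... attach, verify, detach as in the earlier sketch ...
    done
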
00:18:46.522 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.522 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.522 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.522 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.523 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.523 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.782 00:18:46.782 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:46.782 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:46.782 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.044 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.044 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.044 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.044 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.044 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.044 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.044 { 00:18:47.044 "cntlid": 109, 00:18:47.044 "qid": 0, 00:18:47.044 "state": "enabled", 00:18:47.044 "thread": "nvmf_tgt_poll_group_000", 00:18:47.044 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:47.044 "listen_address": { 00:18:47.044 "trtype": "TCP", 00:18:47.044 "adrfam": "IPv4", 00:18:47.044 "traddr": "10.0.0.2", 00:18:47.044 "trsvcid": "4420" 00:18:47.044 }, 00:18:47.044 "peer_address": { 00:18:47.044 "trtype": "TCP", 00:18:47.044 "adrfam": "IPv4", 00:18:47.044 "traddr": "10.0.0.1", 00:18:47.044 "trsvcid": "37092" 00:18:47.044 }, 00:18:47.044 "auth": { 00:18:47.044 "state": "completed", 00:18:47.044 "digest": "sha512", 00:18:47.044 "dhgroup": "ffdhe2048" 00:18:47.044 } 00:18:47.044 } 00:18:47.044 ]' 00:18:47.044 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.044 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.044 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.044 13:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:47.044 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.044 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.044 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.044 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.305 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:18:47.305 13:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.245 13:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.245 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.505 00:18:48.506 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.506 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.506 13:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.766 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.766 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.766 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.766 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.766 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.766 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.766 { 00:18:48.766 "cntlid": 111, 00:18:48.766 "qid": 0, 00:18:48.766 "state": "enabled", 00:18:48.766 "thread": "nvmf_tgt_poll_group_000", 00:18:48.766 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:48.766 "listen_address": { 00:18:48.766 "trtype": "TCP", 00:18:48.766 "adrfam": "IPv4", 00:18:48.766 "traddr": "10.0.0.2", 00:18:48.766 "trsvcid": "4420" 00:18:48.766 }, 00:18:48.766 "peer_address": { 00:18:48.766 "trtype": "TCP", 00:18:48.766 "adrfam": "IPv4", 00:18:48.766 "traddr": "10.0.0.1", 00:18:48.766 "trsvcid": "37106" 00:18:48.766 }, 00:18:48.766 "auth": { 00:18:48.766 "state": "completed", 00:18:48.766 "digest": "sha512", 00:18:48.766 "dhgroup": "ffdhe2048" 00:18:48.766 } 00:18:48.766 } 00:18:48.766 ]' 00:18:48.766 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.766 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:48.766 
13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.766 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:48.766 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.766 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.766 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.766 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.026 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:49.027 13:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.967 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.967 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.228 00:18:50.228 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.228 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.228 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.489 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.489 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.489 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.489 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.489 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.489 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.489 { 00:18:50.489 "cntlid": 113, 00:18:50.489 "qid": 0, 00:18:50.489 "state": "enabled", 00:18:50.489 "thread": "nvmf_tgt_poll_group_000", 00:18:50.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:50.489 "listen_address": { 00:18:50.489 "trtype": "TCP", 00:18:50.489 "adrfam": "IPv4", 00:18:50.489 "traddr": "10.0.0.2", 00:18:50.489 "trsvcid": "4420" 00:18:50.489 }, 00:18:50.489 "peer_address": { 00:18:50.489 "trtype": "TCP", 00:18:50.489 "adrfam": "IPv4", 00:18:50.489 "traddr": "10.0.0.1", 00:18:50.489 "trsvcid": "44626" 00:18:50.489 }, 00:18:50.489 "auth": { 00:18:50.489 "state": "completed", 00:18:50.489 "digest": "sha512", 00:18:50.489 "dhgroup": "ffdhe3072" 00:18:50.489 } 00:18:50.489 } 00:18:50.489 ]' 00:18:50.489 13:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.489 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.489 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.489 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:50.489 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.489 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.489 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.489 13:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.751 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:50.751 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:51.693 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.693 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.693 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:51.693 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.693 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.693 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.693 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.693 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:51.693 13:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:51.693 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:51.693 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.693 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:51.693 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:51.693 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:51.693 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.694 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.694 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.694 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.694 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.694 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.694 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.694 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.953 00:18:51.953 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.953 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.953 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.213 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.213 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.213 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.213 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.213 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.213 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.213 { 00:18:52.213 "cntlid": 115, 00:18:52.213 "qid": 0, 00:18:52.213 "state": "enabled", 00:18:52.213 "thread": "nvmf_tgt_poll_group_000", 00:18:52.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:52.213 "listen_address": { 00:18:52.213 "trtype": "TCP", 00:18:52.213 "adrfam": "IPv4", 00:18:52.213 "traddr": "10.0.0.2", 00:18:52.213 "trsvcid": "4420" 00:18:52.213 }, 00:18:52.213 "peer_address": { 00:18:52.213 "trtype": "TCP", 00:18:52.213 "adrfam": "IPv4", 
00:18:52.213 "traddr": "10.0.0.1", 00:18:52.213 "trsvcid": "44654" 00:18:52.213 }, 00:18:52.213 "auth": { 00:18:52.213 "state": "completed", 00:18:52.213 "digest": "sha512", 00:18:52.213 "dhgroup": "ffdhe3072" 00:18:52.213 } 00:18:52.213 } 00:18:52.213 ]' 00:18:52.213 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.213 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.213 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.213 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:52.213 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.213 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.213 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.213 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.473 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:52.473 13:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:53.050 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.050 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:53.050 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.050 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.310 13:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.572 00:18:53.572 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.572 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.572 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.832 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.832 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.832 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.832 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.832 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.832 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.832 { 00:18:53.832 "cntlid": 117, 00:18:53.832 "qid": 0, 00:18:53.832 "state": "enabled", 00:18:53.832 "thread": "nvmf_tgt_poll_group_000", 00:18:53.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:53.832 "listen_address": { 00:18:53.832 "trtype": "TCP", 
00:18:53.832 "adrfam": "IPv4", 00:18:53.832 "traddr": "10.0.0.2", 00:18:53.832 "trsvcid": "4420" 00:18:53.832 }, 00:18:53.832 "peer_address": { 00:18:53.832 "trtype": "TCP", 00:18:53.832 "adrfam": "IPv4", 00:18:53.832 "traddr": "10.0.0.1", 00:18:53.832 "trsvcid": "44682" 00:18:53.832 }, 00:18:53.832 "auth": { 00:18:53.832 "state": "completed", 00:18:53.832 "digest": "sha512", 00:18:53.832 "dhgroup": "ffdhe3072" 00:18:53.832 } 00:18:53.832 } 00:18:53.832 ]' 00:18:53.832 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.832 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:53.832 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.832 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:53.832 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.832 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.832 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.832 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.093 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:18:54.093 13:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:55.057 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:55.317 00:18:55.318 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.318 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.318 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.578 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.578 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.578 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.578 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.578 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.578 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.578 { 00:18:55.578 "cntlid": 119, 00:18:55.578 "qid": 0, 00:18:55.578 "state": "enabled", 00:18:55.578 "thread": "nvmf_tgt_poll_group_000", 00:18:55.578 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:55.578 "listen_address": { 00:18:55.578 "trtype": "TCP", 00:18:55.578 "adrfam": "IPv4", 00:18:55.578 "traddr": "10.0.0.2", 00:18:55.578 "trsvcid": "4420" 00:18:55.578 }, 00:18:55.578 "peer_address": { 00:18:55.578 "trtype": "TCP", 00:18:55.578 "adrfam": "IPv4", 00:18:55.578 "traddr": "10.0.0.1", 00:18:55.578 "trsvcid": "44708" 00:18:55.578 }, 00:18:55.578 "auth": { 00:18:55.578 "state": "completed", 00:18:55.578 "digest": "sha512", 00:18:55.578 "dhgroup": "ffdhe3072" 00:18:55.578 } 00:18:55.578 } 00:18:55.578 ]' 00:18:55.578 13:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.578 13:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.578 13:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.578 13:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:55.578 13:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.578 13:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.578 13:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.578 13:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.839 13:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:55.839 13:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:18:56.778 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.778 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:56.778 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.778 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.778 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.779 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.779 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.779 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:56.779 13:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:56.779 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:56.779 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.779 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:56.779 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:56.779 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:56.779 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.779 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.779 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.779 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.779 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.779 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.779 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.779 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.039 00:18:57.299 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.299 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.299 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.299 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.299 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.299 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.299 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.300 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.300 13:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.300 { 00:18:57.300 "cntlid": 121, 00:18:57.300 "qid": 0, 00:18:57.300 "state": "enabled", 00:18:57.300 "thread": "nvmf_tgt_poll_group_000", 00:18:57.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:57.300 "listen_address": { 00:18:57.300 "trtype": "TCP", 00:18:57.300 "adrfam": "IPv4", 00:18:57.300 "traddr": "10.0.0.2", 00:18:57.300 "trsvcid": "4420" 00:18:57.300 }, 00:18:57.300 "peer_address": { 00:18:57.300 "trtype": "TCP", 00:18:57.300 "adrfam": "IPv4", 00:18:57.300 "traddr": "10.0.0.1", 00:18:57.300 "trsvcid": "44730" 00:18:57.300 }, 00:18:57.300 "auth": { 00:18:57.300 "state": "completed", 00:18:57.300 "digest": "sha512", 00:18:57.300 "dhgroup": "ffdhe4096" 00:18:57.300 } 00:18:57.300 } 00:18:57.300 ]' 00:18:57.300 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.300 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.300 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.560 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:57.560 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.560 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.560 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.560 13:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.560 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:57.560 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:18:58.502 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.502 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:58.502 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.502 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.502 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
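# Annotation: each connect_authenticate iteration above follows the same pattern:
# restrict the host RPC (bdev_nvme_set_options) to one digest/dhgroup pair, register
# the host NQN on the subsystem with the keys under test, attach a controller through
# the host socket, then confirm from the target side that the established queue pair
# actually negotiated those parameters before detaching. A sketch of the verification
# step as the log performs it, using the same paths and NQNs:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)   # target-side socket
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]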
00:18:58.502 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.502 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:58.502 13:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:58.764 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:58.764 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.764 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:58.764 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:58.764 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:58.764 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.764 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.764 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.764 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.764 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.764 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.764 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.764 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.025 00:18:59.025 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.025 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.025 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.025 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.025 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.025 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.025 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.025 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.025 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.025 { 00:18:59.025 "cntlid": 123, 00:18:59.025 "qid": 0, 00:18:59.025 "state": "enabled", 00:18:59.025 "thread": "nvmf_tgt_poll_group_000", 00:18:59.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:18:59.025 "listen_address": { 00:18:59.025 "trtype": "TCP", 00:18:59.025 "adrfam": "IPv4", 00:18:59.025 "traddr": "10.0.0.2", 00:18:59.025 "trsvcid": "4420" 00:18:59.025 }, 00:18:59.025 "peer_address": { 00:18:59.025 "trtype": "TCP", 00:18:59.025 "adrfam": "IPv4", 00:18:59.025 "traddr": "10.0.0.1", 00:18:59.025 "trsvcid": "44766" 00:18:59.025 }, 00:18:59.025 "auth": { 00:18:59.025 "state": "completed", 00:18:59.025 "digest": "sha512", 00:18:59.025 "dhgroup": "ffdhe4096" 00:18:59.025 } 00:18:59.025 } 00:18:59.025 ]' 00:18:59.025 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.285 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.286 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.286 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:59.286 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.286 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.286 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.286 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.546 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:18:59.547 13:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:19:00.118 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.118 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:00.118 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.118 13:23:22 
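# Annotation: after the RPC-driven bdev attach is verified and torn down, the same
# keys are exercised through the kernel host stack with nvme-cli ("nvme connect ...
# --dhchap-secret ... --dhchap-ctrl-secret ..." followed by "nvme disconnect -n
# <subsystem NQN>", as logged above). Secrets in this format can be produced with
# nvme-cli's key generator; a hedged sketch only, since the flag spellings here are
# from memory and should be checked against `nvme gen-dhchap-key --help` on your build:
nvme gen-dhchap-key --hmac=1 --nqn=nqn.2024-03.io.spdk:cnode0   # prints a DHHC-1:01:...: string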
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.118 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.118 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.118 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:00.118 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:00.379 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:00.379 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.379 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:00.379 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:00.379 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:00.379 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.379 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.379 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.379 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.379 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.379 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.379 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.379 13:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.641 00:19:00.641 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:00.641 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:00.641 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.902 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.902 13:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.902 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.902 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.902 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.902 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:00.902 { 00:19:00.902 "cntlid": 125, 00:19:00.902 "qid": 0, 00:19:00.902 "state": "enabled", 00:19:00.902 "thread": "nvmf_tgt_poll_group_000", 00:19:00.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:00.902 "listen_address": { 00:19:00.902 "trtype": "TCP", 00:19:00.902 "adrfam": "IPv4", 00:19:00.902 "traddr": "10.0.0.2", 00:19:00.902 "trsvcid": "4420" 00:19:00.902 }, 00:19:00.902 "peer_address": { 00:19:00.902 "trtype": "TCP", 00:19:00.902 "adrfam": "IPv4", 00:19:00.902 "traddr": "10.0.0.1", 00:19:00.902 "trsvcid": "49734" 00:19:00.902 }, 00:19:00.902 "auth": { 00:19:00.902 "state": "completed", 00:19:00.902 "digest": "sha512", 00:19:00.902 "dhgroup": "ffdhe4096" 00:19:00.902 } 00:19:00.902 } 00:19:00.902 ]' 00:19:00.902 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:00.902 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:00.902 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:00.902 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:00.902 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.902 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.902 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.902 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.163 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:19:01.163 13:23:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:19:02.110 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.110 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:02.110 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.110 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.110 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.110 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.111 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:02.111 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:02.111 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:02.111 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.111 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:02.111 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:02.111 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:02.111 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.111 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:02.111 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.111 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.111 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.111 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:02.111 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.111 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:02.372 00:19:02.372 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.372 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.372 13:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.634 13:23:25 
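# Annotation: note the asymmetry visible above: the key3 iteration's
# nvmf_subsystem_add_host call carries only --dhchap-key key3 and no
# --dhchap-ctrlr-key, because the test's ckeys entry for key3 is empty. That
# iteration therefore exercises unidirectional authentication (the host proves
# itself to the target), while the key0..key2 iterations are bidirectional.
# Sketched side by side, assuming the key names were registered in the target
# keyring earlier in the test:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$host" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2   # bidirectional
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$host" \
    --dhchap-key key3                            # unidirectional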
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.634 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.634 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.634 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.634 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.634 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:02.634 { 00:19:02.634 "cntlid": 127, 00:19:02.634 "qid": 0, 00:19:02.634 "state": "enabled", 00:19:02.634 "thread": "nvmf_tgt_poll_group_000", 00:19:02.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:02.634 "listen_address": { 00:19:02.634 "trtype": "TCP", 00:19:02.634 "adrfam": "IPv4", 00:19:02.634 "traddr": "10.0.0.2", 00:19:02.634 "trsvcid": "4420" 00:19:02.634 }, 00:19:02.634 "peer_address": { 00:19:02.634 "trtype": "TCP", 00:19:02.634 "adrfam": "IPv4", 00:19:02.634 "traddr": "10.0.0.1", 00:19:02.634 "trsvcid": "49768" 00:19:02.634 }, 00:19:02.634 "auth": { 00:19:02.634 "state": "completed", 00:19:02.634 "digest": "sha512", 00:19:02.634 "dhgroup": "ffdhe4096" 00:19:02.634 } 00:19:02.634 } 00:19:02.634 ]' 00:19:02.634 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:02.634 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:02.634 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:02.634 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:02.634 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:02.634 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.634 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.634 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.893 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:19:02.893 13:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.833 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.121 00:19:04.404 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.404 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.404 
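# Annotation: the --dhchap-dhgroups argument now steps from ffdhe4096 to ffdhe6144
# (and later ffdhe8192). These are the RFC 7919 finite-field DH groups, named for
# their modulus size in bits, so each step makes the DH exchange more expensive
# while the negotiated digest stays sha512. The outer loop this stretch of the log
# is replaying presumably has the following shape (loop variable names assumed, not
# taken from the log):
for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in 0 1 2 3; do
        : # set host options, add host, attach, verify qpair, detach,
        : # then nvme connect/disconnect, then remove host
    done
done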
13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.404 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.404 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.404 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.404 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.404 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.404 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.404 { 00:19:04.404 "cntlid": 129, 00:19:04.404 "qid": 0, 00:19:04.404 "state": "enabled", 00:19:04.404 "thread": "nvmf_tgt_poll_group_000", 00:19:04.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:04.404 "listen_address": { 00:19:04.404 "trtype": "TCP", 00:19:04.404 "adrfam": "IPv4", 00:19:04.404 "traddr": "10.0.0.2", 00:19:04.404 "trsvcid": "4420" 00:19:04.404 }, 00:19:04.404 "peer_address": { 00:19:04.404 "trtype": "TCP", 00:19:04.404 "adrfam": "IPv4", 00:19:04.404 "traddr": "10.0.0.1", 00:19:04.404 "trsvcid": "49792" 00:19:04.404 }, 00:19:04.404 "auth": { 00:19:04.404 "state": "completed", 00:19:04.404 "digest": "sha512", 00:19:04.404 "dhgroup": "ffdhe6144" 00:19:04.404 } 00:19:04.404 } 00:19:04.404 ]' 00:19:04.404 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.404 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.404 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.694 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:04.694 13:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.694 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.694 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.694 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.694 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:19:04.694 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret 
DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:19:05.635 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.635 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:05.635 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.635 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.635 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.635 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.635 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:05.635 13:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:05.635 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:05.635 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.635 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:05.635 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:05.635 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:05.635 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.635 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.635 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.635 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.635 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.635 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.635 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.635 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.207 00:19:06.207 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.207 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.207 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.207 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.207 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.207 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.207 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.207 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.207 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.207 { 00:19:06.207 "cntlid": 131, 00:19:06.207 "qid": 0, 00:19:06.207 "state": "enabled", 00:19:06.207 "thread": "nvmf_tgt_poll_group_000", 00:19:06.207 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:06.207 "listen_address": { 00:19:06.207 "trtype": "TCP", 00:19:06.207 "adrfam": "IPv4", 00:19:06.207 "traddr": "10.0.0.2", 00:19:06.207 "trsvcid": "4420" 00:19:06.207 }, 00:19:06.207 "peer_address": { 00:19:06.207 "trtype": "TCP", 00:19:06.207 "adrfam": "IPv4", 00:19:06.207 "traddr": "10.0.0.1", 00:19:06.207 "trsvcid": "49824" 00:19:06.207 }, 00:19:06.207 "auth": { 00:19:06.207 "state": "completed", 00:19:06.207 "digest": "sha512", 00:19:06.207 "dhgroup": "ffdhe6144" 00:19:06.207 } 00:19:06.207 } 00:19:06.207 ]' 00:19:06.207 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.207 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.207 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.467 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:06.467 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.467 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.467 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.467 13:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.728 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:19:06.728 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:19:07.298 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.298 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:07.298 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.298 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.298 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.298 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.298 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:07.298 13:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:07.558 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:07.558 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.558 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:07.558 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:07.558 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:07.558 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.558 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.558 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.558 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.558 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.558 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.558 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.558 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.819 00:19:08.080 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.080 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.080 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.080 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.080 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.080 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.080 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.080 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.080 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.080 { 00:19:08.080 "cntlid": 133, 00:19:08.080 "qid": 0, 00:19:08.080 "state": "enabled", 00:19:08.080 "thread": "nvmf_tgt_poll_group_000", 00:19:08.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:08.080 "listen_address": { 00:19:08.080 "trtype": "TCP", 00:19:08.080 "adrfam": "IPv4", 00:19:08.080 "traddr": "10.0.0.2", 00:19:08.080 "trsvcid": "4420" 00:19:08.080 }, 00:19:08.080 "peer_address": { 00:19:08.080 "trtype": "TCP", 00:19:08.080 "adrfam": "IPv4", 00:19:08.080 "traddr": "10.0.0.1", 00:19:08.080 "trsvcid": "49860" 00:19:08.080 }, 00:19:08.080 "auth": { 00:19:08.080 "state": "completed", 00:19:08.080 "digest": "sha512", 00:19:08.080 "dhgroup": "ffdhe6144" 00:19:08.080 } 00:19:08.080 } 00:19:08.080 ]' 00:19:08.080 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.080 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.080 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.342 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:08.342 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.342 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.342 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.342 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.342 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret 
DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:19:08.342 13:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:19:09.284 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.284 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:09.284 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.284 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.284 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.284 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.284 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:09.284 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:09.545 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:09.545 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.545 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:09.545 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:09.545 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:09.545 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.545 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:09.545 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.545 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.545 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.545 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:09.545 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:19:09.545 13:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:09.806 00:19:09.806 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:09.806 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.806 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.066 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.066 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.066 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.066 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.066 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.066 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.066 { 00:19:10.066 "cntlid": 135, 00:19:10.066 "qid": 0, 00:19:10.066 "state": "enabled", 00:19:10.066 "thread": "nvmf_tgt_poll_group_000", 00:19:10.066 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:10.066 "listen_address": { 00:19:10.066 "trtype": "TCP", 00:19:10.066 "adrfam": "IPv4", 00:19:10.066 "traddr": "10.0.0.2", 00:19:10.066 "trsvcid": "4420" 00:19:10.066 }, 00:19:10.066 "peer_address": { 00:19:10.066 "trtype": "TCP", 00:19:10.066 "adrfam": "IPv4", 00:19:10.066 "traddr": "10.0.0.1", 00:19:10.066 "trsvcid": "54494" 00:19:10.066 }, 00:19:10.066 "auth": { 00:19:10.066 "state": "completed", 00:19:10.066 "digest": "sha512", 00:19:10.066 "dhgroup": "ffdhe6144" 00:19:10.066 } 00:19:10.066 } 00:19:10.066 ]' 00:19:10.066 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.066 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.066 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.066 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:10.066 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.066 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.066 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.066 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.327 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:19:10.327 13:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:19:10.898 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.898 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.898 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:10.898 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.898 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.898 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.898 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:10.898 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.898 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:10.898 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:11.160 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:11.160 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.160 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:11.160 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:11.160 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:11.160 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.160 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.160 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.160 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.160 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.160 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.160 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.160 13:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.733 00:19:11.733 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.733 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.733 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.995 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.995 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.995 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.995 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.995 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.995 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.995 { 00:19:11.995 "cntlid": 137, 00:19:11.995 "qid": 0, 00:19:11.995 "state": "enabled", 00:19:11.995 "thread": "nvmf_tgt_poll_group_000", 00:19:11.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:11.995 "listen_address": { 00:19:11.995 "trtype": "TCP", 00:19:11.995 "adrfam": "IPv4", 00:19:11.995 "traddr": "10.0.0.2", 00:19:11.995 "trsvcid": "4420" 00:19:11.995 }, 00:19:11.995 "peer_address": { 00:19:11.995 "trtype": "TCP", 00:19:11.995 "adrfam": "IPv4", 00:19:11.995 "traddr": "10.0.0.1", 00:19:11.995 "trsvcid": "54528" 00:19:11.995 }, 00:19:11.995 "auth": { 00:19:11.995 "state": "completed", 00:19:11.995 "digest": "sha512", 00:19:11.995 "dhgroup": "ffdhe8192" 00:19:11.995 } 00:19:11.995 } 00:19:11.995 ]' 00:19:11.995 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.995 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.995 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.995 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:11.995 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.995 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.995 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.995 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.256 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:19:12.256 13:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:19:12.827 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.827 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:12.827 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.827 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.827 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.827 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.827 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:12.827 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:13.087 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:13.087 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:13.087 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:13.087 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:13.087 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:13.087 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.087 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.087 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.087 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.087 13:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.087 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.087 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.087 13:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.657 00:19:13.657 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:13.657 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:13.657 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.917 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.917 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.917 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.917 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.917 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.917 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:13.917 { 00:19:13.917 "cntlid": 139, 00:19:13.918 "qid": 0, 00:19:13.918 "state": "enabled", 00:19:13.918 "thread": "nvmf_tgt_poll_group_000", 00:19:13.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:13.918 "listen_address": { 00:19:13.918 "trtype": "TCP", 00:19:13.918 "adrfam": "IPv4", 00:19:13.918 "traddr": "10.0.0.2", 00:19:13.918 "trsvcid": "4420" 00:19:13.918 }, 00:19:13.918 "peer_address": { 00:19:13.918 "trtype": "TCP", 00:19:13.918 "adrfam": "IPv4", 00:19:13.918 "traddr": "10.0.0.1", 00:19:13.918 "trsvcid": "54550" 00:19:13.918 }, 00:19:13.918 "auth": { 00:19:13.918 "state": "completed", 00:19:13.918 "digest": "sha512", 00:19:13.918 "dhgroup": "ffdhe8192" 00:19:13.918 } 00:19:13.918 } 00:19:13.918 ]' 00:19:13.918 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.918 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.918 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.918 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:13.918 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.918 13:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.918 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.918 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:19:14.178 13:23:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: --dhchap-ctrl-secret DHHC-1:02:YTk3YWY0ODJhZGI5NDQ2ODQ1NmVlNjMzNmM0NmE4ZWFjNzI4NTRjZTdjMDNlNDhhs1bPKQ==: 00:19:14.748 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.010 13:23:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.010 13:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.583 00:19:15.583 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.583 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.583 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.844 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.844 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.844 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.844 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.844 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.844 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.844 { 00:19:15.844 "cntlid": 141, 00:19:15.844 "qid": 0, 00:19:15.844 "state": "enabled", 00:19:15.844 "thread": "nvmf_tgt_poll_group_000", 00:19:15.844 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:15.844 "listen_address": { 00:19:15.844 "trtype": "TCP", 00:19:15.844 "adrfam": "IPv4", 00:19:15.844 "traddr": "10.0.0.2", 00:19:15.844 "trsvcid": "4420" 00:19:15.844 }, 00:19:15.844 "peer_address": { 00:19:15.844 "trtype": "TCP", 00:19:15.844 "adrfam": "IPv4", 00:19:15.844 "traddr": "10.0.0.1", 00:19:15.844 "trsvcid": "54576" 00:19:15.844 }, 00:19:15.844 "auth": { 00:19:15.844 "state": "completed", 00:19:15.844 "digest": "sha512", 00:19:15.844 "dhgroup": "ffdhe8192" 00:19:15.844 } 00:19:15.844 } 00:19:15.844 ]' 00:19:15.844 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.844 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.844 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.844 13:23:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:15.844 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.844 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.844 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.844 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.106 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:19:16.106 13:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:01:NWIwMDMwYjRkNThkYmRlNmI5ZDk0MGJmMTkwYWQ5Y2FfLao7: 00:19:16.680 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.942 13:23:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:16.942 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:16.943 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:17.517 00:19:17.517 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.517 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.517 13:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.779 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.779 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.779 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.779 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.779 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.779 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.779 { 00:19:17.779 "cntlid": 143, 00:19:17.779 "qid": 0, 00:19:17.779 "state": "enabled", 00:19:17.779 "thread": "nvmf_tgt_poll_group_000", 00:19:17.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:17.779 "listen_address": { 00:19:17.779 "trtype": "TCP", 00:19:17.779 "adrfam": "IPv4", 00:19:17.779 "traddr": "10.0.0.2", 00:19:17.779 "trsvcid": "4420" 00:19:17.779 }, 00:19:17.779 "peer_address": { 00:19:17.779 "trtype": "TCP", 00:19:17.779 "adrfam": "IPv4", 00:19:17.779 "traddr": "10.0.0.1", 00:19:17.779 "trsvcid": "54600" 00:19:17.779 }, 00:19:17.779 "auth": { 00:19:17.779 "state": "completed", 00:19:17.779 "digest": "sha512", 00:19:17.779 "dhgroup": "ffdhe8192" 00:19:17.779 } 00:19:17.779 } 00:19:17.779 ]' 00:19:17.779 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.779 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.779 
13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.779 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:17.779 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.779 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.779 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.779 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.041 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:19:18.041 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.986 13:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.986 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.558 00:19:19.558 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.559 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.559 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.820 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.820 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.820 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.820 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.820 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.820 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.820 { 00:19:19.820 "cntlid": 145, 00:19:19.820 "qid": 0, 00:19:19.820 "state": "enabled", 00:19:19.820 "thread": "nvmf_tgt_poll_group_000", 00:19:19.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:19.820 "listen_address": { 00:19:19.820 "trtype": "TCP", 00:19:19.820 "adrfam": "IPv4", 00:19:19.820 "traddr": "10.0.0.2", 00:19:19.820 "trsvcid": "4420" 00:19:19.820 }, 00:19:19.820 "peer_address": { 00:19:19.820 
"trtype": "TCP", 00:19:19.820 "adrfam": "IPv4", 00:19:19.820 "traddr": "10.0.0.1", 00:19:19.821 "trsvcid": "54618" 00:19:19.821 }, 00:19:19.821 "auth": { 00:19:19.821 "state": "completed", 00:19:19.821 "digest": "sha512", 00:19:19.821 "dhgroup": "ffdhe8192" 00:19:19.821 } 00:19:19.821 } 00:19:19.821 ]' 00:19:19.821 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.821 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.821 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.821 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:19.821 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.821 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.821 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.821 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.082 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:19:20.082 13:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:00:ODgwNTUwZDg4MTI1N2RhYmNmYjIwYWYyMGU3OGZhYzQ2NmQwMWJmMDAxNTM3Y2FiZ+ZIxw==: --dhchap-ctrl-secret DHHC-1:03:YzBlMDk4MGVkNjUxOTE5NzI3ZmM2Yzk5NGZlNTBmNTk4YzhiNTUyNzNlZWMzOGQ2NGE5NDZjZGVjN2MyNWU0NoqsP/Q=: 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:21.027 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:21.289 request: 00:19:21.289 { 00:19:21.289 "name": "nvme0", 00:19:21.289 "trtype": "tcp", 00:19:21.289 "traddr": "10.0.0.2", 00:19:21.289 "adrfam": "ipv4", 00:19:21.289 "trsvcid": "4420", 00:19:21.289 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:21.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:21.289 "prchk_reftag": false, 00:19:21.289 "prchk_guard": false, 00:19:21.289 "hdgst": false, 00:19:21.289 "ddgst": false, 00:19:21.289 "dhchap_key": "key2", 00:19:21.289 "allow_unrecognized_csi": false, 00:19:21.289 "method": "bdev_nvme_attach_controller", 00:19:21.289 "req_id": 1 00:19:21.289 } 00:19:21.290 Got JSON-RPC error response 00:19:21.290 response: 00:19:21.290 { 00:19:21.290 "code": -5, 00:19:21.290 "message": "Input/output error" 00:19:21.290 } 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.290 13:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:21.290 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:21.862 request: 00:19:21.862 { 00:19:21.862 "name": "nvme0", 00:19:21.862 "trtype": "tcp", 00:19:21.862 "traddr": "10.0.0.2", 00:19:21.862 "adrfam": "ipv4", 00:19:21.862 "trsvcid": "4420", 00:19:21.862 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:21.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:21.862 "prchk_reftag": false, 00:19:21.862 "prchk_guard": false, 00:19:21.862 "hdgst": false, 00:19:21.862 "ddgst": false, 00:19:21.862 "dhchap_key": "key1", 00:19:21.862 "dhchap_ctrlr_key": "ckey2", 00:19:21.862 "allow_unrecognized_csi": false, 00:19:21.862 "method": "bdev_nvme_attach_controller", 00:19:21.862 "req_id": 1 00:19:21.862 } 00:19:21.862 Got JSON-RPC error response 00:19:21.862 response: 00:19:21.862 { 00:19:21.862 "code": -5, 00:19:21.862 "message": "Input/output error" 00:19:21.862 } 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:21.862 13:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.862 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.437 request: 00:19:22.437 { 00:19:22.437 "name": "nvme0", 00:19:22.437 "trtype": "tcp", 00:19:22.437 "traddr": "10.0.0.2", 00:19:22.437 "adrfam": "ipv4", 00:19:22.437 "trsvcid": "4420", 00:19:22.437 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:22.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:22.437 "prchk_reftag": false, 00:19:22.437 "prchk_guard": false, 00:19:22.437 "hdgst": false, 00:19:22.437 "ddgst": false, 00:19:22.437 "dhchap_key": "key1", 00:19:22.437 "dhchap_ctrlr_key": "ckey1", 00:19:22.437 "allow_unrecognized_csi": false, 00:19:22.437 "method": "bdev_nvme_attach_controller", 00:19:22.437 "req_id": 1 00:19:22.437 } 00:19:22.437 Got JSON-RPC error response 00:19:22.437 response: 00:19:22.437 { 00:19:22.437 "code": -5, 00:19:22.437 "message": "Input/output error" 00:19:22.437 } 00:19:22.437 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:22.437 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:22.437 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:22.437 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:22.437 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:22.437 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.437 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.437 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.437 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 901773 00:19:22.437 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 901773 ']' 00:19:22.437 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 901773 00:19:22.437 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:22.437 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.438 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 901773 00:19:22.438 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.438 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.438 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 901773' 00:19:22.438 killing process with pid 901773 00:19:22.438 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 901773 00:19:22.438 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 901773 00:19:22.438 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:22.438 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:22.438 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:22.438 13:23:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- 
# set +x 00:19:22.698 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=929017 00:19:22.698 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 929017 00:19:22.698 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:22.698 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 929017 ']' 00:19:22.698 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.698 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.698 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.698 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.698 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.640 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.640 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:23.640 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:23.640 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:23.640 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.640 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.640 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:23.641 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 929017 00:19:23.641 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 929017 ']' 00:19:23.641 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.641 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.641 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
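At this point the freshly started target (pid 929017) is idle at the default /var/tmp/spdk.sock, launched with --wait-for-rpc and the nvmf_auth log flag, and the trace that follows registers the DH-HMAC-CHAP secrets generated earlier as named keyring entries so that hosts can be configured by key name rather than by raw secret. (The "code": -5 Input/output error responses above are the expected outcomes of the NOT-wrapped mismatched-key attach attempts, not test failures.) A condensed sketch of the registration sequence, assuming rpc_cmd resolves to plain scripts/rpc.py against that default socket as it does in this trace; the random .yHN/.uIr/... suffixes on the key files are unique to this run:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396"

    # Register each generated secret as a named keyring entry: keyN is the
    # host key for slot N, ckeyN the matching controller key used for
    # bidirectional authentication.
    $rpc keyring_file_add_key key0  /tmp/spdk.key-null.yHN
    $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uIr
    $rpc keyring_file_add_key key1  /tmp/spdk.key-sha256.QUf
    $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k67
    $rpc keyring_file_add_key key2  /tmp/spdk.key-sha384.rlW
    $rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nW7
    $rpc keyring_file_add_key key3  /tmp/spdk.key-sha512.EM6   # key3 has no ckey

    # From here on, host entries reference keys by name:
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

This matches the keyring_file_add_key / nvmf_subsystem_add_host calls visible in the surrounding trace; only the variable names are added for readability.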
00:19:23.641 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.641 13:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.641 null0 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yHN 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.uIr ]] 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.uIr 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.QUf 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.641 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.901 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.k67 ]] 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.k67 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:23.902 13:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.rlW 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.nW7 ]] 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.nW7 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.EM6 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
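The loop above loads each generated key file into the target's keyring (keyN for the host key, ckeyN for the optional controller key), after which connect_authenticate pins the host NQN to key3 on the subsystem and attaches a controller from the host side with the same key. Reduced to its essentials for one key, the flow looks like this (names, paths, and flags taken verbatim from the log):

# Target side: register the key material and allow the host, bound to key3
./scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.EM6
./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3

# Host side: attach a controller that authenticates with the matching key
./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3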
00:19:23.902 13:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.845 nvme0n1 00:19:24.845 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.845 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.845 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.845 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.845 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.845 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.845 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.845 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.845 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.845 { 00:19:24.845 "cntlid": 1, 00:19:24.845 "qid": 0, 00:19:24.845 "state": "enabled", 00:19:24.845 "thread": "nvmf_tgt_poll_group_000", 00:19:24.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:24.845 "listen_address": { 00:19:24.845 "trtype": "TCP", 00:19:24.845 "adrfam": "IPv4", 00:19:24.845 "traddr": "10.0.0.2", 00:19:24.845 "trsvcid": "4420" 00:19:24.845 }, 00:19:24.845 "peer_address": { 00:19:24.845 "trtype": "TCP", 00:19:24.845 "adrfam": "IPv4", 00:19:24.845 "traddr": "10.0.0.1", 00:19:24.845 "trsvcid": "52638" 00:19:24.845 }, 00:19:24.845 "auth": { 00:19:24.845 "state": "completed", 00:19:24.845 "digest": "sha512", 00:19:24.845 "dhgroup": "ffdhe8192" 00:19:24.845 } 00:19:24.845 } 00:19:24.845 ]' 00:19:24.845 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.845 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.845 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.846 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:24.846 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.107 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.107 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.107 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.107 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:19:25.107 13:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:19:26.049 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.049 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:26.049 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.049 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.049 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.049 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:26.049 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.049 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.049 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.049 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:26.050 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:26.050 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:26.050 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:26.050 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:26.050 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:26.050 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.050 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:26.050 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.050 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:26.050 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.050 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.311 request: 00:19:26.311 { 00:19:26.311 "name": "nvme0", 00:19:26.311 "trtype": "tcp", 00:19:26.311 "traddr": "10.0.0.2", 00:19:26.311 "adrfam": "ipv4", 00:19:26.311 "trsvcid": "4420", 00:19:26.311 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:26.311 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:26.311 "prchk_reftag": false, 00:19:26.311 "prchk_guard": false, 00:19:26.311 "hdgst": false, 00:19:26.311 "ddgst": false, 00:19:26.311 "dhchap_key": "key3", 00:19:26.311 "allow_unrecognized_csi": false, 00:19:26.311 "method": "bdev_nvme_attach_controller", 00:19:26.311 "req_id": 1 00:19:26.311 } 00:19:26.311 Got JSON-RPC error response 00:19:26.311 response: 00:19:26.311 { 00:19:26.311 "code": -5, 00:19:26.311 "message": "Input/output error" 00:19:26.311 } 00:19:26.311 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:26.311 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.311 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.311 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.311 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:26.311 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:26.311 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:26.311 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:26.572 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:26.572 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:26.572 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:26.572 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:26.572 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.572 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:26.572 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.572 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:26.572 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.572 13:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:26.572 request: 00:19:26.572 { 00:19:26.572 "name": "nvme0", 00:19:26.572 "trtype": "tcp", 00:19:26.572 "traddr": "10.0.0.2", 00:19:26.572 "adrfam": "ipv4", 00:19:26.573 "trsvcid": "4420", 00:19:26.573 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:26.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:26.573 "prchk_reftag": false, 00:19:26.573 "prchk_guard": false, 00:19:26.573 "hdgst": false, 00:19:26.573 "ddgst": false, 00:19:26.573 "dhchap_key": "key3", 00:19:26.573 "allow_unrecognized_csi": false, 00:19:26.573 "method": "bdev_nvme_attach_controller", 00:19:26.573 "req_id": 1 00:19:26.573 } 00:19:26.573 Got JSON-RPC error response 00:19:26.573 response: 00:19:26.573 { 00:19:26.573 "code": -5, 00:19:26.573 "message": "Input/output error" 00:19:26.573 } 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:26.835 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:26.836 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:26.836 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:27.097 request: 00:19:27.097 { 00:19:27.097 "name": "nvme0", 00:19:27.097 "trtype": "tcp", 00:19:27.097 "traddr": "10.0.0.2", 00:19:27.097 "adrfam": "ipv4", 00:19:27.097 "trsvcid": "4420", 00:19:27.097 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:27.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:27.097 "prchk_reftag": false, 00:19:27.097 "prchk_guard": false, 00:19:27.097 "hdgst": false, 00:19:27.097 "ddgst": false, 00:19:27.097 "dhchap_key": "key0", 00:19:27.097 "dhchap_ctrlr_key": "key1", 00:19:27.097 "allow_unrecognized_csi": false, 00:19:27.097 "method": "bdev_nvme_attach_controller", 00:19:27.097 "req_id": 1 00:19:27.097 } 00:19:27.097 Got JSON-RPC error response 00:19:27.098 response: 00:19:27.098 { 00:19:27.098 "code": -5, 00:19:27.098 "message": "Input/output error" 00:19:27.098 } 00:19:27.359 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:27.359 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:27.359 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:27.359 13:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:27.359 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:27.359 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:27.359 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:27.359 nvme0n1 00:19:27.620 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:27.620 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:27.620 13:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.620 13:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.620 13:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.620 13:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.881 13:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:19:27.881 13:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.881 13:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.881 13:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.881 13:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:27.881 13:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:27.881 13:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:28.825 nvme0n1 00:19:28.825 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:28.825 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:28.825 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.825 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.825 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:28.825 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.825 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.825 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.825 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:28.825 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:28.825 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.087 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.087 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:19:29.087 13:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 -l 0 --dhchap-secret DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: --dhchap-ctrl-secret DHHC-1:03:NmVlOGU4NDE2NzllYmZmYjk3NThjNzBkZGQ4NTY5OTk5ZDc0Y2NkYjc2ZDNjNDdiY2E0MzVlNTVjNzA4OTYyN3sY0UU=: 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:30.031 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:30.605 request: 00:19:30.605 { 00:19:30.605 "name": "nvme0", 00:19:30.605 "trtype": "tcp", 00:19:30.605 "traddr": "10.0.0.2", 00:19:30.605 "adrfam": "ipv4", 00:19:30.605 "trsvcid": "4420", 00:19:30.605 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:30.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:19:30.605 "prchk_reftag": false, 00:19:30.605 "prchk_guard": false, 00:19:30.605 "hdgst": false, 00:19:30.605 "ddgst": false, 00:19:30.605 "dhchap_key": "key1", 00:19:30.605 "allow_unrecognized_csi": false, 00:19:30.605 "method": "bdev_nvme_attach_controller", 00:19:30.605 "req_id": 1 00:19:30.605 } 00:19:30.605 Got JSON-RPC error response 00:19:30.605 response: 00:19:30.605 { 00:19:30.605 "code": -5, 00:19:30.605 "message": "Input/output error" 00:19:30.605 } 00:19:30.605 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:30.605 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:30.605 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:30.605 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:30.605 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:30.605 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:30.605 13:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:31.548 nvme0n1 00:19:31.548 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:31.548 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:31.548 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.548 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.548 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.548 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.810 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:31.810 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.810 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.810 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.810 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:31.810 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:31.810 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:32.086 nvme0n1 00:19:32.086 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:32.086 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:32.086 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.086 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.086 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.086 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.347 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:32.347 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.348 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.348 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.348 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: '' 2s 00:19:32.348 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:32.348 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:32.348 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: 00:19:32.348 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:32.348 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:32.348 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:32.348 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: ]] 00:19:32.348 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTZjOTAzOGFlZTAzMWExOTE5NDkwYWRlNGE4Y2FmYTJbViba: 00:19:32.348 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:32.348 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:32.348 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:34.259 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:34.259 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:34.259 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:34.259 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:34.259 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:34.259 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: 2s 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: ]] 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NDI1MjRjMmExN2JmODg1NWZmNmVjODIzNTQ0Mjg1MzRmZTkwYzA0MDJjY2JmNDkyIyKgnQ==: 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:34.520 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:36.439 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:36.439 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:19:36.439 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:19:36.439 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:19:36.439 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:19:36.439 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:19:36.439 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:19:36.439 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.439 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:36.439 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.439 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.439 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.439 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:36.439 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:36.439 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:37.382 nvme0n1 00:19:37.382 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:37.382 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.382 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.382 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.382 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:37.382 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:37.953 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:37.953 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:37.953 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.953 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.953 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:37.953 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.953 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.953 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.953 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:37.953 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:38.213 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:38.213 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:38.213 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.475 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.475 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:38.475 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.475 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.475 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.475 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:38.475 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:38.475 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:38.475 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:19:38.475 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.475 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:38.475 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:38.475 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:38.475 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:39.048 request: 00:19:39.048 { 00:19:39.048 "name": "nvme0", 00:19:39.048 "dhchap_key": "key1", 00:19:39.048 "dhchap_ctrlr_key": "key3", 00:19:39.048 "method": "bdev_nvme_set_keys", 00:19:39.048 "req_id": 1 00:19:39.048 } 00:19:39.048 Got JSON-RPC error response 00:19:39.048 response: 00:19:39.048 { 00:19:39.048 "code": -13, 00:19:39.048 "message": "Permission denied" 00:19:39.048 } 00:19:39.048 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:39.048 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:39.048 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:39.048 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:39.048 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:39.048 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:39.048 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.048 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:19:39.048 13:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:40.435 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:40.435 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:40.435 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.435 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:40.435 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:40.435 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.435 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.435 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.435 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:40.435 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:40.435 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:41.379 nvme0n1 00:19:41.379 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:41.379 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.379 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.379 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.379 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:41.379 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:19:41.379 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:41.379 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 
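This part of the run exercises live re-keying: the target's allowed keys are updated first with nvmf_subsystem_set_keys, then the host controller is re-keyed with bdev_nvme_set_keys, and deliberately mismatched pairs are expected to fail. The happy-path order, with the commands as they appear in the log:

# 1. Target: replace the keys the subsystem will accept for this host
./scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

# 2. Host: re-authenticate the live controller with the matching pair
./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3

When the pairs disagree (key1/key3 earlier, key2/key0 below), bdev_nvme_set_keys returns JSON-RPC error -13 "Permission denied", and the test's NOT wrapper counts that failure as the expected outcome.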
00:19:41.379 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.380 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:19:41.380 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.380 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:41.380 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:41.640 request: 00:19:41.640 { 00:19:41.640 "name": "nvme0", 00:19:41.640 "dhchap_key": "key2", 00:19:41.640 "dhchap_ctrlr_key": "key0", 00:19:41.640 "method": "bdev_nvme_set_keys", 00:19:41.640 "req_id": 1 00:19:41.640 } 00:19:41.640 Got JSON-RPC error response 00:19:41.640 response: 00:19:41.640 { 00:19:41.640 "code": -13, 00:19:41.640 "message": "Permission denied" 00:19:41.640 } 00:19:41.640 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:19:41.640 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.640 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.640 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.640 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:41.640 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:41.640 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.900 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:41.900 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:42.842 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:42.842 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:42.842 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.103 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:43.103 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:43.103 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:43.103 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 901974 00:19:43.103 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 901974 ']' 00:19:43.103 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 901974 00:19:43.103 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:43.103 13:24:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.103 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 901974 00:19:43.103 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:43.103 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:43.103 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 901974' 00:19:43.103 killing process with pid 901974 00:19:43.103 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 901974 00:19:43.103 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 901974 00:19:43.362 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:43.362 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:43.362 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:43.362 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:43.362 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:43.362 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:43.362 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:43.363 rmmod nvme_tcp 00:19:43.363 rmmod nvme_fabrics 00:19:43.363 rmmod nvme_keyring 00:19:43.363 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:43.363 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:43.363 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:43.363 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 929017 ']' 00:19:43.363 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 929017 00:19:43.363 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 929017 ']' 00:19:43.363 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 929017 00:19:43.363 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:19:43.363 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:43.363 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 929017 00:19:43.363 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:43.363 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:43.363 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 929017' 00:19:43.363 killing process with pid 929017 00:19:43.363 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 929017 00:19:43.363 13:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@978 -- # wait 929017 00:19:43.623 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:43.623 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:43.623 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:43.623 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:43.623 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:43.623 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:43.623 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:43.623 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:43.623 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:43.623 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.623 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.623 13:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.537 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:45.537 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.yHN /tmp/spdk.key-sha256.QUf /tmp/spdk.key-sha384.rlW /tmp/spdk.key-sha512.EM6 /tmp/spdk.key-sha512.uIr /tmp/spdk.key-sha384.k67 /tmp/spdk.key-sha256.nW7 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:45.537 00:19:45.537 real 2m46.370s 00:19:45.537 user 6m9.279s 00:19:45.537 sys 0m25.379s 00:19:45.537 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:45.537 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.537 ************************************ 00:19:45.537 END TEST nvmf_auth_target 00:19:45.537 ************************************ 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:45.800 ************************************ 00:19:45.800 START TEST nvmf_bdevio_no_huge 00:19:45.800 ************************************ 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:45.800 * Looking for test storage... 
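Note: the nvmf_auth_target suite above closes with its timing summary and the rm -f of the generated /tmp/spdk.key-* files, after which run_test dispatches the next suite, nvmf_bdevio_no_huge. A minimal sketch of the wrapper pattern behind the START TEST / END TEST banners, assuming a simplified, hypothetical run_test; the real helper in autotest_common.sh also validates its arguments and manages timing and xtrace state, as the traces above show:

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        "$@"     # here: test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }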
00:19:45.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:45.800 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:46.070 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:46.070 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:46.070 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:46.070 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:46.070 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:46.070 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:46.070 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:46.070 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:46.070 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:46.070 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:46.070 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:46.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.070 --rc genhtml_branch_coverage=1 00:19:46.070 --rc genhtml_function_coverage=1 00:19:46.070 --rc genhtml_legend=1 00:19:46.070 --rc geninfo_all_blocks=1 00:19:46.070 --rc geninfo_unexecuted_blocks=1 00:19:46.070 00:19:46.070 ' 00:19:46.070 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:46.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.070 --rc genhtml_branch_coverage=1 00:19:46.070 --rc genhtml_function_coverage=1 00:19:46.070 --rc genhtml_legend=1 00:19:46.070 --rc geninfo_all_blocks=1 00:19:46.070 --rc geninfo_unexecuted_blocks=1 00:19:46.070 00:19:46.070 ' 00:19:46.070 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:46.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.070 --rc genhtml_branch_coverage=1 00:19:46.070 --rc genhtml_function_coverage=1 00:19:46.070 --rc genhtml_legend=1 00:19:46.070 --rc geninfo_all_blocks=1 00:19:46.070 --rc geninfo_unexecuted_blocks=1 00:19:46.070 00:19:46.070 ' 00:19:46.070 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:46.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.070 --rc genhtml_branch_coverage=1 00:19:46.070 --rc genhtml_function_coverage=1 00:19:46.070 --rc genhtml_legend=1 00:19:46.070 --rc geninfo_all_blocks=1 00:19:46.070 --rc geninfo_unexecuted_blocks=1 00:19:46.070 00:19:46.070 ' 00:19:46.070 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:46.070 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:46.071 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:46.071 13:24:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:54.352 
13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:54.352 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:19:54.353 Found 0000:31:00.0 (0x8086 - 0x159b) 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:19:54.353 Found 0000:31:00.1 (0x8086 - 0x159b) 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:19:54.353 Found net devices under 0000:31:00.0: cvl_0_0 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:19:54.353 Found net devices under 0000:31:00.1: cvl_0_1 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:54.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:54.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:19:54.353 00:19:54.353 --- 10.0.0.2 ping statistics --- 00:19:54.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.353 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:54.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:54.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:19:54.353 00:19:54.353 --- 10.0.0.1 ping statistics --- 00:19:54.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:54.353 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:54.353 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:54.614 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:54.614 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:54.614 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:54.614 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:54.614 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=938423 00:19:54.614 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 938423 00:19:54.614 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:54.614 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 938423 ']' 00:19:54.614 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.614 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.614 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.614 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.614 13:24:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:54.614 [2024-12-05 13:24:16.991849] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:19:54.614 [2024-12-05 13:24:16.991959] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:54.614 [2024-12-05 13:24:17.111909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:54.614 [2024-12-05 13:24:17.171911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:54.614 [2024-12-05 13:24:17.171959] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:54.614 [2024-12-05 13:24:17.171968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:54.614 [2024-12-05 13:24:17.171975] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:54.614 [2024-12-05 13:24:17.171982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
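Note: nvmfappstart above launches the target as ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78, then waits on the RPC socket instead of sleeping for a fixed time. A simplified, hypothetical reduction of that launch-and-wait pattern (the real waitforlisten in autotest_common.sh retries against a bounded max_retries counter, as traced above):

    # Start nvmf_tgt in the server-side namespace without hugepages,
    # capped at 1024 MiB of ordinary memory, on the 0x78 core mask:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!

    # Poll the UNIX-domain RPC socket until the app answers:
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died
        sleep 0.5
    done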
00:19:54.614 [2024-12-05 13:24:17.173501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:54.614 [2024-12-05 13:24:17.173750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:54.614 [2024-12-05 13:24:17.173968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:54.614 [2024-12-05 13:24:17.174067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.559 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.559 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:19:55.559 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.560 [2024-12-05 13:24:17.874564] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.560 Malloc0 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:55.560 [2024-12-05 13:24:17.928610] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:55.560 { 00:19:55.560 "params": { 00:19:55.560 "name": "Nvme$subsystem", 00:19:55.560 "trtype": "$TEST_TRANSPORT", 00:19:55.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:55.560 "adrfam": "ipv4", 00:19:55.560 "trsvcid": "$NVMF_PORT", 00:19:55.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:55.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:55.560 "hdgst": ${hdgst:-false}, 00:19:55.560 "ddgst": ${ddgst:-false} 00:19:55.560 }, 00:19:55.560 "method": "bdev_nvme_attach_controller" 00:19:55.560 } 00:19:55.560 EOF 00:19:55.560 )") 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:55.560 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:55.560 "params": { 00:19:55.560 "name": "Nvme1", 00:19:55.560 "trtype": "tcp", 00:19:55.560 "traddr": "10.0.0.2", 00:19:55.560 "adrfam": "ipv4", 00:19:55.560 "trsvcid": "4420", 00:19:55.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:55.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:55.560 "hdgst": false, 00:19:55.560 "ddgst": false 00:19:55.560 }, 00:19:55.560 "method": "bdev_nvme_attach_controller" 00:19:55.560 }' 00:19:55.560 [2024-12-05 13:24:17.997163] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
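Note: collected from the rpc_cmd traces above, the target-side provisioning for this suite reduces to five RPCs. The equivalent standalone rpc.py calls would look like this (flags, bdev geometry, NQN and serial are taken verbatim from the log; /var/tmp/spdk.sock is a filesystem-bound UNIX socket, so it stays reachable even though the daemon runs inside the cvl_0_0_ns_spdk namespace):

    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192     # options copied from the rpc_cmd trace
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The bdevio binary is then pointed at the printed JSON via --json /dev/fd/62, which attaches Nvme1 to that listener at start-up.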
00:19:55.560 [2024-12-05 13:24:17.997236] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid938697 ] 00:19:55.560 [2024-12-05 13:24:18.088042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:55.822 [2024-12-05 13:24:18.143708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.822 [2024-12-05 13:24:18.143825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.822 [2024-12-05 13:24:18.143827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.822 I/O targets: 00:19:55.822 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:55.822 00:19:55.822 00:19:55.822 CUnit - A unit testing framework for C - Version 2.1-3 00:19:55.822 http://cunit.sourceforge.net/ 00:19:55.822 00:19:55.822 00:19:55.822 Suite: bdevio tests on: Nvme1n1 00:19:55.822 Test: blockdev write read block ...passed 00:19:56.083 Test: blockdev write zeroes read block ...passed 00:19:56.083 Test: blockdev write zeroes read no split ...passed 00:19:56.083 Test: blockdev write zeroes read split ...passed 00:19:56.083 Test: blockdev write zeroes read split partial ...passed 00:19:56.083 Test: blockdev reset ...[2024-12-05 13:24:18.448025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:56.083 [2024-12-05 13:24:18.448093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1189f70 (9): Bad file descriptor 00:19:56.083 [2024-12-05 13:24:18.557438] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:19:56.083 passed 00:19:56.083 Test: blockdev write read 8 blocks ...passed 00:19:56.083 Test: blockdev write read size > 128k ...passed 00:19:56.083 Test: blockdev write read invalid size ...passed 00:19:56.344 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:56.344 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:56.344 Test: blockdev write read max offset ...passed 00:19:56.344 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:56.344 Test: blockdev writev readv 8 blocks ...passed 00:19:56.344 Test: blockdev writev readv 30 x 1block ...passed 00:19:56.344 Test: blockdev writev readv block ...passed 00:19:56.344 Test: blockdev writev readv size > 128k ...passed 00:19:56.344 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:56.344 Test: blockdev comparev and writev ...[2024-12-05 13:24:18.901489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.344 [2024-12-05 13:24:18.901514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:56.344 [2024-12-05 13:24:18.901525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.344 [2024-12-05 13:24:18.901532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:56.344 [2024-12-05 13:24:18.901912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.344 [2024-12-05 13:24:18.901922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:56.344 [2024-12-05 13:24:18.901932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.344 [2024-12-05 13:24:18.901938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:56.344 [2024-12-05 13:24:18.902279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.344 [2024-12-05 13:24:18.902288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:56.344 [2024-12-05 13:24:18.902298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.344 [2024-12-05 13:24:18.902303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:56.344 [2024-12-05 13:24:18.902628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.344 [2024-12-05 13:24:18.902637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:56.344 [2024-12-05 13:24:18.902647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:56.344 [2024-12-05 13:24:18.902652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:56.605 passed 00:19:56.605 Test: blockdev nvme passthru rw ...passed 00:19:56.605 Test: blockdev nvme passthru vendor specific ...[2024-12-05 13:24:18.987455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.605 [2024-12-05 13:24:18.987466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:56.605 [2024-12-05 13:24:18.987716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.605 [2024-12-05 13:24:18.987724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:56.605 [2024-12-05 13:24:18.987934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.605 [2024-12-05 13:24:18.987947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:56.605 [2024-12-05 13:24:18.988176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:56.605 [2024-12-05 13:24:18.988184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:56.605 passed 00:19:56.605 Test: blockdev nvme admin passthru ...passed 00:19:56.605 Test: blockdev copy ...passed 00:19:56.605 00:19:56.605 Run Summary: Type Total Ran Passed Failed Inactive 00:19:56.605 suites 1 1 n/a 0 0 00:19:56.605 tests 23 23 23 0 0 00:19:56.605 asserts 152 152 152 0 n/a 00:19:56.605 00:19:56.605 Elapsed time = 1.478 seconds 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:56.866 rmmod nvme_tcp 00:19:56.866 rmmod nvme_fabrics 00:19:56.866 rmmod nvme_keyring 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 938423 ']' 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 938423 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 938423 ']' 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 938423 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.866 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 938423 00:19:57.128 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:57.128 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:57.128 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 938423' 00:19:57.128 killing process with pid 938423 00:19:57.128 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 938423 00:19:57.128 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 938423 00:19:57.128 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:57.128 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:57.128 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:57.128 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:57.128 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:57.128 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:57.128 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:57.389 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:57.389 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:57.389 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.389 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.389 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.303 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:59.303 00:19:59.303 real 0m13.596s 00:19:59.303 user 0m14.734s 00:19:59.303 sys 0m7.387s 00:19:59.303 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.303 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:59.303 ************************************ 00:19:59.303 END TEST nvmf_bdevio_no_huge 00:19:59.303 ************************************ 00:19:59.303 13:24:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:59.303 13:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:59.303 13:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.303 13:24:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:59.303 ************************************ 00:19:59.303 START TEST nvmf_tls 00:19:59.303 ************************************ 00:19:59.303 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:59.566 * Looking for test storage... 00:19:59.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:59.566 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:59.566 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:19:59.566 13:24:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:59.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.566 --rc genhtml_branch_coverage=1 00:19:59.566 --rc genhtml_function_coverage=1 00:19:59.566 --rc genhtml_legend=1 00:19:59.566 --rc geninfo_all_blocks=1 00:19:59.566 --rc geninfo_unexecuted_blocks=1 00:19:59.566 00:19:59.566 ' 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:59.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.566 --rc genhtml_branch_coverage=1 00:19:59.566 --rc genhtml_function_coverage=1 00:19:59.566 --rc genhtml_legend=1 00:19:59.566 --rc geninfo_all_blocks=1 00:19:59.566 --rc geninfo_unexecuted_blocks=1 00:19:59.566 00:19:59.566 ' 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:59.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.566 --rc genhtml_branch_coverage=1 00:19:59.566 --rc genhtml_function_coverage=1 00:19:59.566 --rc genhtml_legend=1 00:19:59.566 --rc geninfo_all_blocks=1 00:19:59.566 --rc geninfo_unexecuted_blocks=1 00:19:59.566 00:19:59.566 ' 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:59.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.566 --rc genhtml_branch_coverage=1 00:19:59.566 --rc genhtml_function_coverage=1 00:19:59.566 --rc genhtml_legend=1 00:19:59.566 --rc geninfo_all_blocks=1 00:19:59.566 --rc geninfo_unexecuted_blocks=1 00:19:59.566 00:19:59.566 ' 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
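[editor's note] The scripts/common.sh trace above walks the dotted-version comparison used to pick lcov options: both version strings are split on ".", "-", and ":" into arrays and compared component by component, with missing components padded as zero. A minimal standalone sketch of that logic, assuming purely numeric components (the real helper additionally validates each component through its decimal() check):

# Minimal sketch of the cmp_versions logic traced above; assumes numeric
# components only (scripts/common.sh also validates them before comparing).
cmp_versions_sketch() {
    local op=$2 v len
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for ((v = 0; v < len; v++)); do
        ver1[v]=${ver1[v]:-0} ver2[v]=${ver2[v]:-0}   # pad: "2" compares as 2.0
        ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
        ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]   # every component matched
}
cmp_versions_sketch 1.15 '<' 2 && echo "1.15 < 2"   # succeeds, matching the "lt 1.15 2" trace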
00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.566 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:59.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:59.567 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
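[editor's note] nvmftestinit is now probing the host: the arrays being filled above collect the supported Intel (e810/x722) and Mellanox PCI device IDs out of a pci_bus_cache, and each surviving PCI function is then mapped to its kernel interfaces through sysfs — which is how the cvl_0_0/cvl_0_1 names below are discovered. A minimal sketch of that sysfs mapping, with a hypothetical helper name; the real logic is the "/sys/bus/pci/devices/$pci/net/"* glob traced below:

# Minimal sketch: list the kernel net devices backed by one PCI function,
# using the same sysfs glob as nvmf/common.sh@411 below.
pci_to_netdevs() {
    local pci=$1 dev
    local -a devs=("/sys/bus/pci/devices/$pci/net/"*)
    # With no match the glob stays literal; treat that as "no NIC here".
    [[ -e ${devs[0]} ]] || return 1
    for dev in "${devs[@]}"; do
        echo "${dev##*/}"   # strip the sysfs path, keep the interface name
    done
}
# Example (address taken from the discovery output below):
# pci_to_netdevs 0000:31:00.0   # -> cvl_0_0 on this test bed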
00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:09.576 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:09.576 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:09.576 Found net devices under 0000:31:00.0: cvl_0_0 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:09.576 Found net devices under 0000:31:00.1: cvl_0_1 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:09.576 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:09.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:20:09.577 00:20:09.577 --- 10.0.0.2 ping statistics --- 00:20:09.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.577 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:09.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:09.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:20:09.577 00:20:09.577 --- 10.0.0.1 ping statistics --- 00:20:09.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.577 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=943799 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 943799 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 943799 ']' 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.577 13:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.577 [2024-12-05 13:24:30.724835] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:20:09.577 [2024-12-05 13:24:30.724896] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.577 [2024-12-05 13:24:30.829006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.577 [2024-12-05 13:24:30.868465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.577 [2024-12-05 13:24:30.868509] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.577 [2024-12-05 13:24:30.868518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.577 [2024-12-05 13:24:30.868524] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.577 [2024-12-05 13:24:30.868530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.577 [2024-12-05 13:24:30.869270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.577 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.577 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:09.577 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:09.577 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:09.577 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.577 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.577 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:09.577 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:09.577 true 00:20:09.577 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:09.577 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:09.577 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:09.577 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:09.577 13:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:09.577 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:09.577 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:09.838 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:09.838 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:09.838 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:10.099 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:10.099 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:10.359 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:10.359 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:10.359 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:10.359 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:10.359 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:10.359 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:10.359 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:10.619 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:10.619 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:10.880 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:10.880 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:10.880 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:10.880 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:10.880 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.ZF5td0tAl1 00:20:11.141 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:11.401 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.4Xq2t1xmGo 00:20:11.401 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:11.401 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:11.401 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ZF5td0tAl1 00:20:11.401 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.4Xq2t1xmGo 00:20:11.401 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:11.401 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:11.661 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.ZF5td0tAl1 00:20:11.661 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ZF5td0tAl1 00:20:11.661 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:11.920 [2024-12-05 13:24:34.296950] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.920 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:11.920 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:12.180 [2024-12-05 13:24:34.633768] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.180 [2024-12-05 13:24:34.633983] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.180 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:12.440 malloc0 00:20:12.440 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:12.440 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ZF5td0tAl1 00:20:12.702 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:12.962 13:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ZF5td0tAl1 00:20:22.961 Initializing NVMe Controllers 00:20:22.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:22.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:22.961 Initialization complete. Launching workers. 00:20:22.961 ======================================================== 00:20:22.961 Latency(us) 00:20:22.961 Device Information : IOPS MiB/s Average min max 00:20:22.961 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18713.55 73.10 3420.02 1098.62 4117.09 00:20:22.961 ======================================================== 00:20:22.961 Total : 18713.55 73.10 3420.02 1098.62 4117.09 00:20:22.961 00:20:22.961 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZF5td0tAl1 00:20:22.961 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:22.961 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:22.961 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:22.961 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZF5td0tAl1 00:20:22.961 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:22.961 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=946543 00:20:22.961 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:22.961 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 946543 /var/tmp/bdevperf.sock 00:20:22.961 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:22.961 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 946543 ']' 00:20:22.961 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.961 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.961 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:22.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:22.961 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.961 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.961 [2024-12-05 13:24:45.489884] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:20:22.961 [2024-12-05 13:24:45.489940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946543 ] 00:20:23.221 [2024-12-05 13:24:45.554194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.221 [2024-12-05 13:24:45.583211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.221 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.221 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:23.221 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZF5td0tAl1 00:20:23.481 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:23.481 [2024-12-05 13:24:45.973482] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.741 TLSTESTn1 00:20:23.741 13:24:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:23.741 Running I/O for 10 seconds... 
00:20:25.623 5039.00 IOPS, 19.68 MiB/s [2024-12-05T12:24:49.578Z] 5722.00 IOPS, 22.35 MiB/s [2024-12-05T12:24:50.520Z] 5878.00 IOPS, 22.96 MiB/s [2024-12-05T12:24:51.461Z] 5877.75 IOPS, 22.96 MiB/s [2024-12-05T12:24:52.412Z] 5862.00 IOPS, 22.90 MiB/s [2024-12-05T12:24:53.354Z] 5836.50 IOPS, 22.80 MiB/s [2024-12-05T12:24:54.294Z] 5934.00 IOPS, 23.18 MiB/s [2024-12-05T12:24:55.235Z] 5840.00 IOPS, 22.81 MiB/s [2024-12-05T12:24:56.615Z] 5786.33 IOPS, 22.60 MiB/s [2024-12-05T12:24:56.615Z] 5849.60 IOPS, 22.85 MiB/s 00:20:34.047 Latency(us) 00:20:34.047 [2024-12-05T12:24:56.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.047 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:34.047 Verification LBA range: start 0x0 length 0x2000 00:20:34.047 TLSTESTn1 : 10.02 5853.09 22.86 0.00 0.00 21834.55 4642.13 29272.75 00:20:34.047 [2024-12-05T12:24:56.615Z] =================================================================================================================== 00:20:34.047 [2024-12-05T12:24:56.615Z] Total : 5853.09 22.86 0.00 0.00 21834.55 4642.13 29272.75 00:20:34.047 { 00:20:34.047 "results": [ 00:20:34.047 { 00:20:34.047 "job": "TLSTESTn1", 00:20:34.047 "core_mask": "0x4", 00:20:34.047 "workload": "verify", 00:20:34.047 "status": "finished", 00:20:34.047 "verify_range": { 00:20:34.047 "start": 0, 00:20:34.047 "length": 8192 00:20:34.047 }, 00:20:34.047 "queue_depth": 128, 00:20:34.047 "io_size": 4096, 00:20:34.047 "runtime": 10.015392, 00:20:34.047 "iops": 5853.090922452162, 00:20:34.047 "mibps": 22.863636415828758, 00:20:34.047 "io_failed": 0, 00:20:34.047 "io_timeout": 0, 00:20:34.047 "avg_latency_us": 21834.545939964628, 00:20:34.047 "min_latency_us": 4642.133333333333, 00:20:34.047 "max_latency_us": 29272.746666666666 00:20:34.047 } 00:20:34.047 ], 00:20:34.048 "core_count": 1 00:20:34.048 } 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 946543 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 946543 ']' 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 946543 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 946543 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 946543' 00:20:34.048 killing process with pid 946543 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 946543 00:20:34.048 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.048 00:20:34.048 Latency(us) 00:20:34.048 [2024-12-05T12:24:56.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.048 [2024-12-05T12:24:56.616Z] 
=================================================================================================================== 00:20:34.048 [2024-12-05T12:24:56.616Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 946543 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4Xq2t1xmGo 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4Xq2t1xmGo 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4Xq2t1xmGo 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4Xq2t1xmGo 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=948668 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 948668 /var/tmp/bdevperf.sock 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 948668 ']' 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
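[editor's note] The two interchange-format keys minted earlier in this trace (/tmp/tmp.ZF5td0tAl1 and /tmp/tmp.4Xq2t1xmGo, both chmod 0600) feed every bdevperf pass in this test, including the expected-failure run starting here. A condensed sketch of the flow, assuming $rootdir points at the SPDK checkout; format_key_sketch is a hypothetical stand-in for the inline "python -" call in nvmf/common.sh, and the RPC invocations are copied from the trace:

rpc=$rootdir/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# The interchange format wraps the raw key bytes plus their CRC32
# (little-endian) in base64, between a prefix and a hash identifier
# (the "01" seen in the keys traced above).
format_key_sketch() {
    local prefix=$1 key=$2
    python3 -c 'import base64, struct, sys, zlib
key = sys.argv[2].encode()
crc = struct.pack("<I", zlib.crc32(key))
print(sys.argv[1] + ":01:" + base64.b64encode(key + crc).decode() + ":")' "$prefix" "$key"
}
# format_key_sketch NVMeTLSkey-1 00112233445566778899aabbccddeeff
#   -> NVMeTLSkey-1:01:MDAx...ZmZwJEiQ:  (the first key traced above)

# bdevperf is started idle (-z) and then driven entirely over its RPC socket:
$rootdir/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 &
$rpc -s $sock keyring_file_add_key key0 /tmp/tmp.ZF5td0tAl1
$rpc -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
$rootdir/examples/bdev/bdevperf/bdevperf.py -t 20 -s $sock perform_tests

The negative passes below swap either the key file or the hostnqn in the attach step; everything else in the sequence is identical.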
00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.048 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.048 [2024-12-05 13:24:56.448039] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:20:34.048 [2024-12-05 13:24:56.448097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948668 ] 00:20:34.048 [2024-12-05 13:24:56.511119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.048 [2024-12-05 13:24:56.539867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.308 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.308 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:34.308 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4Xq2t1xmGo 00:20:34.308 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:34.569 [2024-12-05 13:24:56.922336] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:34.569 [2024-12-05 13:24:56.928611] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:34.569 [2024-12-05 13:24:56.929517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614990 (107): Transport endpoint is not connected 00:20:34.569 [2024-12-05 13:24:56.930513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614990 (9): Bad file descriptor 00:20:34.569 [2024-12-05 13:24:56.931515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:34.569 [2024-12-05 13:24:56.931523] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:34.569 [2024-12-05 13:24:56.931529] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:34.569 [2024-12-05 13:24:56.931535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
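[editor's note] The failed attach above is the point of this pass: bdevperf was handed the second key (/tmp/tmp.4Xq2t1xmGo) while the target only knows the first, so the TLS handshake collapses and the client's reads see errno 107 (Transport endpoint is not connected). The NOT wrapper whose bookkeeping shows in the trace turns that failure into a pass before the JSON-RPC failure record that follows. A minimal sketch of its shape — simplified; the real helper in autotest_common.sh also screens out signal deaths case by case:

# Minimal sketch of the expected-failure wrapper (assumed, simplified shape).
NOT() {
    local es=0
    "$@" || es=$?              # run the command, keep its exit status
    ((es > 128)) && return 1   # killed by a signal: never an "expected" failure
    ((!es == 0))               # succeed only when the wrapped command failed
}
# As used at tls.sh@147: the mismatched-key attach must fail for the test to pass.
# NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4Xq2t1xmGo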
00:20:34.569 request: 00:20:34.569 { 00:20:34.569 "name": "TLSTEST", 00:20:34.569 "trtype": "tcp", 00:20:34.569 "traddr": "10.0.0.2", 00:20:34.569 "adrfam": "ipv4", 00:20:34.569 "trsvcid": "4420", 00:20:34.569 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.569 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:34.569 "prchk_reftag": false, 00:20:34.569 "prchk_guard": false, 00:20:34.569 "hdgst": false, 00:20:34.569 "ddgst": false, 00:20:34.569 "psk": "key0", 00:20:34.569 "allow_unrecognized_csi": false, 00:20:34.569 "method": "bdev_nvme_attach_controller", 00:20:34.569 "req_id": 1 00:20:34.569 } 00:20:34.569 Got JSON-RPC error response 00:20:34.569 response: 00:20:34.569 { 00:20:34.569 "code": -5, 00:20:34.569 "message": "Input/output error" 00:20:34.569 } 00:20:34.569 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 948668 00:20:34.569 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 948668 ']' 00:20:34.569 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 948668 00:20:34.569 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:34.569 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:34.569 13:24:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 948668 00:20:34.569 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:34.569 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:34.569 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 948668' 00:20:34.569 killing process with pid 948668 00:20:34.569 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 948668 00:20:34.569 Received shutdown signal, test time was about 10.000000 seconds 00:20:34.569 00:20:34.569 Latency(us) 00:20:34.569 [2024-12-05T12:24:57.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.569 [2024-12-05T12:24:57.137Z] =================================================================================================================== 00:20:34.569 [2024-12-05T12:24:57.137Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:34.569 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 948668 00:20:34.569 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:34.569 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:34.569 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:34.569 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:34.569 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:34.569 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZF5td0tAl1 00:20:34.569 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:34.569 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.ZF5td0tAl1 00:20:34.569 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:34.569 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ZF5td0tAl1 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZF5td0tAl1 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=948894 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 948894 /var/tmp/bdevperf.sock 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 948894 ']' 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:34.570 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.830 [2024-12-05 13:24:57.168092] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:20:34.830 [2024-12-05 13:24:57.168150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948894 ] 00:20:34.830 [2024-12-05 13:24:57.231366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.830 [2024-12-05 13:24:57.260148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:34.830 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:34.830 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:34.830 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZF5td0tAl1 00:20:35.090 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:35.090 [2024-12-05 13:24:57.646404] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.090 [2024-12-05 13:24:57.650796] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:35.090 [2024-12-05 13:24:57.650817] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:35.090 [2024-12-05 13:24:57.650837] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:35.090 [2024-12-05 13:24:57.651476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eb990 (107): Transport endpoint is not connected 00:20:35.090 [2024-12-05 13:24:57.652471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19eb990 (9): Bad file descriptor 00:20:35.090 [2024-12-05 13:24:57.653473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:35.090 [2024-12-05 13:24:57.653481] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:35.090 [2024-12-05 13:24:57.653487] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:35.090 [2024-12-05 13:24:57.653493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
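The ERROR lines above show why this attach fails at the TLS layer: the target builds a PSK identity of the form "NVMe0R01 <hostnqn> <subnqn>" and looks it up in its keyring, so a client presenting host2 against a PSK registered for host1 never finds a match, the socket is torn down (errno 107), and the initiator surfaces the failure as the -5 Input/output error in the response below. The initiator-side calls behind it, with the rpc.py path shortened, amount to:

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZF5td0tAl1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0   # no PSK for this identity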
00:20:35.090 request: 00:20:35.090 { 00:20:35.090 "name": "TLSTEST", 00:20:35.090 "trtype": "tcp", 00:20:35.090 "traddr": "10.0.0.2", 00:20:35.090 "adrfam": "ipv4", 00:20:35.090 "trsvcid": "4420", 00:20:35.090 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:35.090 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:35.090 "prchk_reftag": false, 00:20:35.090 "prchk_guard": false, 00:20:35.090 "hdgst": false, 00:20:35.090 "ddgst": false, 00:20:35.090 "psk": "key0", 00:20:35.090 "allow_unrecognized_csi": false, 00:20:35.090 "method": "bdev_nvme_attach_controller", 00:20:35.090 "req_id": 1 00:20:35.090 } 00:20:35.090 Got JSON-RPC error response 00:20:35.090 response: 00:20:35.090 { 00:20:35.090 "code": -5, 00:20:35.090 "message": "Input/output error" 00:20:35.090 } 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 948894 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 948894 ']' 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 948894 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 948894 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 948894' 00:20:35.352 killing process with pid 948894 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 948894 00:20:35.352 Received shutdown signal, test time was about 10.000000 seconds 00:20:35.352 00:20:35.352 Latency(us) 00:20:35.352 [2024-12-05T12:24:57.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.352 [2024-12-05T12:24:57.920Z] =================================================================================================================== 00:20:35.352 [2024-12-05T12:24:57.920Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 948894 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZF5td0tAl1 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.ZF5td0tAl1 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZF5td0tAl1 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZF5td0tAl1 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=948916 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 948916 /var/tmp/bdevperf.sock 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 948916 ']' 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:35.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.352 13:24:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:35.352 [2024-12-05 13:24:57.884470] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
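Every scenario in this block repeats the same harness shape: start bdevperf idle (-z) on a private RPC socket, wait for the socket, register the key under test, try the attach, and only in the passing case (later in the log) drive I/O with perform_tests. Condensed, with the binary and rpc.py paths shortened:

build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# waitforlisten polls /var/tmp/bdevperf.sock until the app answers, then:
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$psk_path"
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n "$subnqn" -q "$hostnqn" --psk key0
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests   # only reached when the attach succeeds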
00:20:35.352 [2024-12-05 13:24:57.884529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid948916 ] 00:20:35.614 [2024-12-05 13:24:57.949426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:35.614 [2024-12-05 13:24:57.978315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.614 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.614 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:35.614 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZF5td0tAl1 00:20:35.875 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:35.875 [2024-12-05 13:24:58.396647] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.875 [2024-12-05 13:24:58.402086] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:35.875 [2024-12-05 13:24:58.402105] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:35.875 [2024-12-05 13:24:58.402125] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:35.875 [2024-12-05 13:24:58.402859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ee990 (107): Transport endpoint is not connected 00:20:35.875 [2024-12-05 13:24:58.403855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ee990 (9): Bad file descriptor 00:20:35.875 [2024-12-05 13:24:58.404857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:35.875 [2024-12-05 13:24:58.404869] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:35.875 [2024-12-05 13:24:58.404875] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:35.875 [2024-12-05 13:24:58.404882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:20:35.875 request: 00:20:35.875 { 00:20:35.875 "name": "TLSTEST", 00:20:35.875 "trtype": "tcp", 00:20:35.875 "traddr": "10.0.0.2", 00:20:35.875 "adrfam": "ipv4", 00:20:35.875 "trsvcid": "4420", 00:20:35.875 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:35.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:35.875 "prchk_reftag": false, 00:20:35.875 "prchk_guard": false, 00:20:35.875 "hdgst": false, 00:20:35.875 "ddgst": false, 00:20:35.875 "psk": "key0", 00:20:35.875 "allow_unrecognized_csi": false, 00:20:35.875 "method": "bdev_nvme_attach_controller", 00:20:35.875 "req_id": 1 00:20:35.875 } 00:20:35.875 Got JSON-RPC error response 00:20:35.875 response: 00:20:35.875 { 00:20:35.875 "code": -5, 00:20:35.875 "message": "Input/output error" 00:20:35.875 } 00:20:35.875 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 948916 00:20:35.875 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 948916 ']' 00:20:35.876 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 948916 00:20:35.876 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 948916 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 948916' 00:20:36.137 killing process with pid 948916 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 948916 00:20:36.137 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.137 00:20:36.137 Latency(us) 00:20:36.137 [2024-12-05T12:24:58.705Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.137 [2024-12-05T12:24:58.705Z] =================================================================================================================== 00:20:36.137 [2024-12-05T12:24:58.705Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 948916 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:36.137 13:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=949239 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 949239 /var/tmp/bdevperf.sock 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 949239 ']' 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.137 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.137 [2024-12-05 13:24:58.655075] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:20:36.137 [2024-12-05 13:24:58.655128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949239 ] 00:20:36.398 [2024-12-05 13:24:58.718948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.398 [2024-12-05 13:24:58.746878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.398 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:36.398 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:36.398 13:24:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:36.659 [2024-12-05 13:24:58.984573] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:36.659 [2024-12-05 13:24:58.984603] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:36.659 request: 00:20:36.659 { 00:20:36.659 "name": "key0", 00:20:36.659 "path": "", 00:20:36.659 "method": "keyring_file_add_key", 00:20:36.659 "req_id": 1 00:20:36.659 } 00:20:36.659 Got JSON-RPC error response 00:20:36.659 response: 00:20:36.659 { 00:20:36.659 "code": -1, 00:20:36.659 "message": "Operation not permitted" 00:20:36.659 } 00:20:36.659 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:36.659 [2024-12-05 13:24:59.161097] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:36.659 [2024-12-05 13:24:59.161123] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:36.659 request: 00:20:36.659 { 00:20:36.659 "name": "TLSTEST", 00:20:36.659 "trtype": "tcp", 00:20:36.659 "traddr": "10.0.0.2", 00:20:36.659 "adrfam": "ipv4", 00:20:36.659 "trsvcid": "4420", 00:20:36.659 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.659 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.659 "prchk_reftag": false, 00:20:36.659 "prchk_guard": false, 00:20:36.659 "hdgst": false, 00:20:36.659 "ddgst": false, 00:20:36.659 "psk": "key0", 00:20:36.659 "allow_unrecognized_csi": false, 00:20:36.659 "method": "bdev_nvme_attach_controller", 00:20:36.659 "req_id": 1 00:20:36.659 } 00:20:36.659 Got JSON-RPC error response 00:20:36.659 response: 00:20:36.659 { 00:20:36.659 "code": -126, 00:20:36.659 "message": "Required key not available" 00:20:36.659 } 00:20:36.659 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 949239 00:20:36.659 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 949239 ']' 00:20:36.659 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 949239 00:20:36.659 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:36.660 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.660 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 949239 
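Unlike the two NQN-mismatch cases, this scenario never gets a key into the bdevperf keyring at all: an empty string is passed as the key path, keyring_file_check_path rejects anything that is not an absolute path, and the attach then fails one layer earlier, with -126 (Required key not available) instead of -5. The pair of calls, paths as in the log:

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''   # -1 Operation not permitted
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0                     # -126 Required key not available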
00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 949239' 00:20:36.921 killing process with pid 949239 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 949239 00:20:36.921 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.921 00:20:36.921 Latency(us) 00:20:36.921 [2024-12-05T12:24:59.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.921 [2024-12-05T12:24:59.489Z] =================================================================================================================== 00:20:36.921 [2024-12-05T12:24:59.489Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 949239 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 943799 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 943799 ']' 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 943799 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 943799 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 943799' 00:20:36.921 killing process with pid 943799 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 943799 00:20:36.921 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 943799 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.zCSsQ53vVc 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.zCSsQ53vVc 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=949277 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 949277 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 949277 ']' 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.182 13:24:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.182 [2024-12-05 13:24:59.644252] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:20:37.182 [2024-12-05 13:24:59.644303] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.182 [2024-12-05 13:24:59.741787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.444 [2024-12-05 13:24:59.770263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.444 [2024-12-05 13:24:59.770294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
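The format_key trace above derives the long-format interchange PSK: the prefix NVMeTLSkey-1, a two-digit hash selector (02 here, from digest=2, the variant associated with SHA-384 and a 48-byte key), and a base64 payload holding the configured key bytes followed by a 4-byte CRC-32. A sketch equivalent to the python snippet the helper runs, assuming (as the helper does) a little-endian CRC:

key=00112233445566778899aabbccddeeff0011223344556677
python3 -c 'import base64, sys, zlib
k = sys.argv[1].encode()
crc = zlib.crc32(k).to_bytes(4, "little")   # 4-byte CRC-32 appended to the key bytes
print("NVMeTLSkey-1:02:" + base64.b64encode(k + crc).decode() + ":")' "$key"
# -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The result is written to the mktemp file /tmp/tmp.zCSsQ53vVc and chmod'ed to 0600, a mode the keyring verifies later in the log.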
00:20:37.444 [2024-12-05 13:24:59.770300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.444 [2024-12-05 13:24:59.770305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.444 [2024-12-05 13:24:59.770309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:37.444 [2024-12-05 13:24:59.770742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.016 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.016 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:38.017 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:38.017 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.017 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.017 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.017 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.zCSsQ53vVc 00:20:38.017 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zCSsQ53vVc 00:20:38.017 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:38.278 [2024-12-05 13:25:00.619525] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.278 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:38.278 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:38.538 [2024-12-05 13:25:00.940309] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:38.538 [2024-12-05 13:25:00.940510] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.538 13:25:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:38.538 malloc0 00:20:38.798 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:38.798 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zCSsQ53vVc 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zCSsQ53vVc 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zCSsQ53vVc 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=949761 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 949761 /var/tmp/bdevperf.sock 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 949761 ']' 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:39.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.062 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.062 [2024-12-05 13:25:01.619143] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
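For this positive run, setup_nvmf_tgt above brought the target side up first: TCP transport, a subsystem, a listener created with -k (secure channel), a malloc namespace, and the file-based PSK bound to host1. Condensed from the trace, rpc.py path shortened:

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.zCSsQ53vVc
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With both NQNs and the key matching on both ends, the attach that follows succeeds (TLSTESTn1 below) and bdevperf drives the 10-second 4 KiB verify workload.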
00:20:39.062 [2024-12-05 13:25:01.619196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949761 ] 00:20:39.324 [2024-12-05 13:25:01.684123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.324 [2024-12-05 13:25:01.712822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.324 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:39.324 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:39.324 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zCSsQ53vVc 00:20:39.584 13:25:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:39.584 [2024-12-05 13:25:02.131425] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:39.843 TLSTESTn1 00:20:39.843 13:25:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:39.843 Running I/O for 10 seconds... 00:20:42.162 4785.00 IOPS, 18.69 MiB/s [2024-12-05T12:25:05.669Z] 5512.00 IOPS, 21.53 MiB/s [2024-12-05T12:25:06.607Z] 5713.00 IOPS, 22.32 MiB/s [2024-12-05T12:25:07.546Z] 5694.25 IOPS, 22.24 MiB/s [2024-12-05T12:25:08.491Z] 5529.80 IOPS, 21.60 MiB/s [2024-12-05T12:25:09.502Z] 5641.50 IOPS, 22.04 MiB/s [2024-12-05T12:25:10.444Z] 5750.14 IOPS, 22.46 MiB/s [2024-12-05T12:25:11.387Z] 5758.38 IOPS, 22.49 MiB/s [2024-12-05T12:25:12.332Z] 5688.22 IOPS, 22.22 MiB/s [2024-12-05T12:25:12.594Z] 5740.00 IOPS, 22.42 MiB/s 00:20:50.026 Latency(us) 00:20:50.026 [2024-12-05T12:25:12.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.026 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:50.026 Verification LBA range: start 0x0 length 0x2000 00:20:50.026 TLSTESTn1 : 10.02 5743.15 22.43 0.00 0.00 22253.29 6280.53 27743.57 00:20:50.026 [2024-12-05T12:25:12.594Z] =================================================================================================================== 00:20:50.026 [2024-12-05T12:25:12.594Z] Total : 5743.15 22.43 0.00 0.00 22253.29 6280.53 27743.57 00:20:50.026 { 00:20:50.026 "results": [ 00:20:50.026 { 00:20:50.026 "job": "TLSTESTn1", 00:20:50.026 "core_mask": "0x4", 00:20:50.026 "workload": "verify", 00:20:50.026 "status": "finished", 00:20:50.026 "verify_range": { 00:20:50.026 "start": 0, 00:20:50.026 "length": 8192 00:20:50.026 }, 00:20:50.026 "queue_depth": 128, 00:20:50.026 "io_size": 4096, 00:20:50.026 "runtime": 10.016451, 00:20:50.026 "iops": 5743.151940742285, 00:20:50.026 "mibps": 22.43418726852455, 00:20:50.026 "io_failed": 0, 00:20:50.026 "io_timeout": 0, 00:20:50.026 "avg_latency_us": 22253.29330042068, 00:20:50.026 "min_latency_us": 6280.533333333334, 00:20:50.026 "max_latency_us": 27743.573333333334 00:20:50.026 } 00:20:50.026 ], 00:20:50.026 
"core_count": 1 00:20:50.026 } 00:20:50.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:50.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 949761 00:20:50.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 949761 ']' 00:20:50.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 949761 00:20:50.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:50.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 949761 00:20:50.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:50.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:50.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 949761' 00:20:50.026 killing process with pid 949761 00:20:50.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 949761 00:20:50.026 Received shutdown signal, test time was about 10.000000 seconds 00:20:50.026 00:20:50.026 Latency(us) 00:20:50.026 [2024-12-05T12:25:12.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.026 [2024-12-05T12:25:12.594Z] =================================================================================================================== 00:20:50.026 [2024-12-05T12:25:12.594Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:50.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 949761 00:20:50.026 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.zCSsQ53vVc 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zCSsQ53vVc 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zCSsQ53vVc 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zCSsQ53vVc 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:50.027 
13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zCSsQ53vVc 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=951978 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 951978 /var/tmp/bdevperf.sock 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 951978 ']' 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:50.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.027 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:50.288 [2024-12-05 13:25:12.611511] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:20:50.288 [2024-12-05 13:25:12.611566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid951978 ] 00:20:50.288 [2024-12-05 13:25:12.676221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:50.288 [2024-12-05 13:25:12.704942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.859 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.859 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:50.859 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zCSsQ53vVc 00:20:51.120 [2024-12-05 13:25:13.544527] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zCSsQ53vVc': 0100666 00:20:51.120 [2024-12-05 13:25:13.544555] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:51.120 request: 00:20:51.120 { 00:20:51.120 "name": "key0", 00:20:51.120 "path": "/tmp/tmp.zCSsQ53vVc", 00:20:51.120 "method": "keyring_file_add_key", 00:20:51.120 "req_id": 1 00:20:51.120 } 00:20:51.120 Got JSON-RPC error response 00:20:51.120 response: 00:20:51.120 { 00:20:51.120 "code": -1, 00:20:51.120 "message": "Operation not permitted" 00:20:51.120 } 00:20:51.120 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:51.380 [2024-12-05 13:25:13.729065] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:51.380 [2024-12-05 13:25:13.729093] bdev_nvme.c:6722:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:51.380 request: 00:20:51.380 { 00:20:51.380 "name": "TLSTEST", 00:20:51.380 "trtype": "tcp", 00:20:51.380 "traddr": "10.0.0.2", 00:20:51.380 "adrfam": "ipv4", 00:20:51.380 "trsvcid": "4420", 00:20:51.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:51.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:51.380 "prchk_reftag": false, 00:20:51.380 "prchk_guard": false, 00:20:51.380 "hdgst": false, 00:20:51.380 "ddgst": false, 00:20:51.380 "psk": "key0", 00:20:51.380 "allow_unrecognized_csi": false, 00:20:51.380 "method": "bdev_nvme_attach_controller", 00:20:51.380 "req_id": 1 00:20:51.380 } 00:20:51.380 Got JSON-RPC error response 00:20:51.380 response: 00:20:51.380 { 00:20:51.380 "code": -126, 00:20:51.380 "message": "Required key not available" 00:20:51.380 } 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 951978 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 951978 ']' 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 951978 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 951978 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 951978' 00:20:51.380 killing process with pid 951978 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 951978 00:20:51.380 Received shutdown signal, test time was about 10.000000 seconds 00:20:51.380 00:20:51.380 Latency(us) 00:20:51.380 [2024-12-05T12:25:13.948Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:51.380 [2024-12-05T12:25:13.948Z] =================================================================================================================== 00:20:51.380 [2024-12-05T12:25:13.948Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 951978 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 949277 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 949277 ']' 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 949277 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.380 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 949277 00:20:51.641 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:51.641 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:51.641 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 949277' 00:20:51.641 killing process with pid 949277 00:20:51.641 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 949277 00:20:51.641 13:25:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 949277 00:20:51.641 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:51.641 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:51.641 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.641 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.641 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=952306 
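Taken together, the client-side scenarios so far map each misconfiguration to a distinct JSON-RPC error before the NOT wrapper sees the non-zero exit status:

keyring_file_add_key, empty path or mode 0666    -> -1    Operation not permitted
bdev_nvme_attach_controller, key0 never loaded   -> -126  Required key not available
bdev_nvme_attach_controller, mismatched NQN pair -> -5    Input/output error (PSK identity lookup fails during the handshake)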
00:20:51.641 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 952306 00:20:51.641 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:51.641 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 952306 ']' 00:20:51.641 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.641 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.641 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.641 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.641 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.641 [2024-12-05 13:25:14.161723] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:20:51.641 [2024-12-05 13:25:14.161776] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.902 [2024-12-05 13:25:14.259134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.902 [2024-12-05 13:25:14.289225] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.902 [2024-12-05 13:25:14.289257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.902 [2024-12-05 13:25:14.289263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.902 [2024-12-05 13:25:14.289268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.902 [2024-12-05 13:25:14.289272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:51.902 [2024-12-05 13:25:14.289717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.474 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:52.474 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:52.474 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:52.474 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:52.474 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.474 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.474 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.zCSsQ53vVc 00:20:52.474 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:20:52.474 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.zCSsQ53vVc 00:20:52.474 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:20:52.474 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.474 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:20:52.474 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:52.474 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.zCSsQ53vVc 00:20:52.474 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zCSsQ53vVc 00:20:52.474 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:52.735 [2024-12-05 13:25:15.147504] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.735 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:52.995 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:52.995 [2024-12-05 13:25:15.472303] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:52.995 [2024-12-05 13:25:15.472504] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:52.995 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:53.256 malloc0 00:20:53.256 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:53.516 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zCSsQ53vVc 00:20:53.516 [2024-12-05 
13:25:15.979312] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zCSsQ53vVc': 0100666 00:20:53.517 [2024-12-05 13:25:15.979332] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:53.517 request: 00:20:53.517 { 00:20:53.517 "name": "key0", 00:20:53.517 "path": "/tmp/tmp.zCSsQ53vVc", 00:20:53.517 "method": "keyring_file_add_key", 00:20:53.517 "req_id": 1 00:20:53.517 } 00:20:53.517 Got JSON-RPC error response 00:20:53.517 response: 00:20:53.517 { 00:20:53.517 "code": -1, 00:20:53.517 "message": "Operation not permitted" 00:20:53.517 } 00:20:53.517 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:53.778 [2024-12-05 13:25:16.147748] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:53.778 [2024-12-05 13:25:16.147774] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:53.778 request: 00:20:53.778 { 00:20:53.778 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.778 "host": "nqn.2016-06.io.spdk:host1", 00:20:53.778 "psk": "key0", 00:20:53.778 "method": "nvmf_subsystem_add_host", 00:20:53.778 "req_id": 1 00:20:53.778 } 00:20:53.778 Got JSON-RPC error response 00:20:53.778 response: 00:20:53.778 { 00:20:53.778 "code": -32603, 00:20:53.778 "message": "Internal error" 00:20:53.778 } 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 952306 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 952306 ']' 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 952306 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 952306 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 952306' 00:20:53.778 killing process with pid 952306 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 952306 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 952306 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.zCSsQ53vVc 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:53.778 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.039 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=952708 00:20:54.039 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 952708 00:20:54.039 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:54.039 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 952708 ']' 00:20:54.039 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.039 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.039 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:54.039 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.040 13:25:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.040 [2024-12-05 13:25:16.406532] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:20:54.040 [2024-12-05 13:25:16.406592] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:54.040 [2024-12-05 13:25:16.505120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.040 [2024-12-05 13:25:16.534498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:54.040 [2024-12-05 13:25:16.534527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:54.040 [2024-12-05 13:25:16.534532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:54.040 [2024-12-05 13:25:16.534537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:54.040 [2024-12-05 13:25:16.534541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
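The keyring_file_add_key failure above is the expected half of a negative test (note the NOT wrapper at target/tls.sh@178): the key file was created with mode 0666, and SPDK's file-based keyring refuses any key file that is group- or world-accessible, which is why the script chmods it to 0600 at target/tls.sh@182 before restarting the target. A minimal sketch of preparing a PSK file the keyring will accept, assuming a hypothetical path /tmp/psk.key and a documentation-style placeholder key in the NVMe TLS PSK interchange format rather than the key from this run:

umask 077    # create the file owner-only from the start
cat > /tmp/psk.key <<'EOF'
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmY6CxSO:
EOF
chmod 0600 /tmp/psk.key    # same fix this test applies before retrying
scripts/rpc.py keyring_file_add_key key0 /tmp/psk.key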
00:20:54.040 [2024-12-05 13:25:16.535008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:54.981 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.981 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:54.981 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:54.981 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:54.981 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.981 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:54.981 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.zCSsQ53vVc 00:20:54.981 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zCSsQ53vVc 00:20:54.981 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:54.982 [2024-12-05 13:25:17.375696] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.982 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:55.242 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:55.242 [2024-12-05 13:25:17.696482] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:55.242 [2024-12-05 13:25:17.696688] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.242 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:55.502 malloc0 00:20:55.502 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:55.502 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zCSsQ53vVc 00:20:55.762 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:56.023 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:56.023 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=953071 00:20:56.023 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:56.023 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 953071 /var/tmp/bdevperf.sock 00:20:56.023 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 953071 ']' 00:20:56.023 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:56.023 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.023 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:56.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:56.023 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.023 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:56.023 [2024-12-05 13:25:18.374385] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:20:56.023 [2024-12-05 13:25:18.374427] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953071 ] 00:20:56.023 [2024-12-05 13:25:18.428603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:56.023 [2024-12-05 13:25:18.457234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.023 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.023 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:56.023 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zCSsQ53vVc 00:20:56.284 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:56.544 [2024-12-05 13:25:18.871416] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:56.544 TLSTESTn1 00:20:56.544 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:56.805 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:56.805 "subsystems": [ 00:20:56.805 { 00:20:56.805 "subsystem": "keyring", 00:20:56.805 "config": [ 00:20:56.805 { 00:20:56.805 "method": "keyring_file_add_key", 00:20:56.805 "params": { 00:20:56.805 "name": "key0", 00:20:56.805 "path": "/tmp/tmp.zCSsQ53vVc" 00:20:56.805 } 00:20:56.805 } 00:20:56.805 ] 00:20:56.805 }, 00:20:56.805 { 00:20:56.805 "subsystem": "iobuf", 00:20:56.805 "config": [ 00:20:56.805 { 00:20:56.805 "method": "iobuf_set_options", 00:20:56.805 "params": { 00:20:56.805 "small_pool_count": 8192, 00:20:56.805 "large_pool_count": 1024, 00:20:56.805 "small_bufsize": 8192, 00:20:56.805 "large_bufsize": 135168, 00:20:56.805 "enable_numa": false 00:20:56.805 } 00:20:56.805 } 00:20:56.805 ] 00:20:56.805 }, 00:20:56.805 { 00:20:56.805 "subsystem": "sock", 00:20:56.805 "config": [ 00:20:56.805 { 00:20:56.805 "method": "sock_set_default_impl", 00:20:56.805 "params": { 00:20:56.805 "impl_name": "posix" 
00:20:56.805 } 00:20:56.805 }, 00:20:56.805 { 00:20:56.805 "method": "sock_impl_set_options", 00:20:56.805 "params": { 00:20:56.805 "impl_name": "ssl", 00:20:56.805 "recv_buf_size": 4096, 00:20:56.805 "send_buf_size": 4096, 00:20:56.805 "enable_recv_pipe": true, 00:20:56.805 "enable_quickack": false, 00:20:56.805 "enable_placement_id": 0, 00:20:56.805 "enable_zerocopy_send_server": true, 00:20:56.805 "enable_zerocopy_send_client": false, 00:20:56.805 "zerocopy_threshold": 0, 00:20:56.805 "tls_version": 0, 00:20:56.805 "enable_ktls": false 00:20:56.805 } 00:20:56.805 }, 00:20:56.805 { 00:20:56.805 "method": "sock_impl_set_options", 00:20:56.805 "params": { 00:20:56.805 "impl_name": "posix", 00:20:56.805 "recv_buf_size": 2097152, 00:20:56.805 "send_buf_size": 2097152, 00:20:56.805 "enable_recv_pipe": true, 00:20:56.805 "enable_quickack": false, 00:20:56.805 "enable_placement_id": 0, 00:20:56.805 "enable_zerocopy_send_server": true, 00:20:56.805 "enable_zerocopy_send_client": false, 00:20:56.805 "zerocopy_threshold": 0, 00:20:56.805 "tls_version": 0, 00:20:56.805 "enable_ktls": false 00:20:56.805 } 00:20:56.805 } 00:20:56.805 ] 00:20:56.805 }, 00:20:56.805 { 00:20:56.805 "subsystem": "vmd", 00:20:56.805 "config": [] 00:20:56.805 }, 00:20:56.805 { 00:20:56.805 "subsystem": "accel", 00:20:56.805 "config": [ 00:20:56.805 { 00:20:56.805 "method": "accel_set_options", 00:20:56.805 "params": { 00:20:56.805 "small_cache_size": 128, 00:20:56.805 "large_cache_size": 16, 00:20:56.805 "task_count": 2048, 00:20:56.805 "sequence_count": 2048, 00:20:56.805 "buf_count": 2048 00:20:56.805 } 00:20:56.805 } 00:20:56.805 ] 00:20:56.805 }, 00:20:56.805 { 00:20:56.805 "subsystem": "bdev", 00:20:56.805 "config": [ 00:20:56.805 { 00:20:56.805 "method": "bdev_set_options", 00:20:56.805 "params": { 00:20:56.805 "bdev_io_pool_size": 65535, 00:20:56.805 "bdev_io_cache_size": 256, 00:20:56.805 "bdev_auto_examine": true, 00:20:56.805 "iobuf_small_cache_size": 128, 00:20:56.805 "iobuf_large_cache_size": 16 00:20:56.805 } 00:20:56.805 }, 00:20:56.805 { 00:20:56.805 "method": "bdev_raid_set_options", 00:20:56.805 "params": { 00:20:56.805 "process_window_size_kb": 1024, 00:20:56.805 "process_max_bandwidth_mb_sec": 0 00:20:56.805 } 00:20:56.805 }, 00:20:56.805 { 00:20:56.805 "method": "bdev_iscsi_set_options", 00:20:56.805 "params": { 00:20:56.805 "timeout_sec": 30 00:20:56.805 } 00:20:56.805 }, 00:20:56.805 { 00:20:56.805 "method": "bdev_nvme_set_options", 00:20:56.805 "params": { 00:20:56.805 "action_on_timeout": "none", 00:20:56.805 "timeout_us": 0, 00:20:56.805 "timeout_admin_us": 0, 00:20:56.805 "keep_alive_timeout_ms": 10000, 00:20:56.805 "arbitration_burst": 0, 00:20:56.805 "low_priority_weight": 0, 00:20:56.805 "medium_priority_weight": 0, 00:20:56.805 "high_priority_weight": 0, 00:20:56.805 "nvme_adminq_poll_period_us": 10000, 00:20:56.805 "nvme_ioq_poll_period_us": 0, 00:20:56.805 "io_queue_requests": 0, 00:20:56.805 "delay_cmd_submit": true, 00:20:56.805 "transport_retry_count": 4, 00:20:56.805 "bdev_retry_count": 3, 00:20:56.805 "transport_ack_timeout": 0, 00:20:56.805 "ctrlr_loss_timeout_sec": 0, 00:20:56.805 "reconnect_delay_sec": 0, 00:20:56.805 "fast_io_fail_timeout_sec": 0, 00:20:56.805 "disable_auto_failback": false, 00:20:56.805 "generate_uuids": false, 00:20:56.805 "transport_tos": 0, 00:20:56.805 "nvme_error_stat": false, 00:20:56.805 "rdma_srq_size": 0, 00:20:56.805 "io_path_stat": false, 00:20:56.805 "allow_accel_sequence": false, 00:20:56.805 "rdma_max_cq_size": 0, 00:20:56.805 
"rdma_cm_event_timeout_ms": 0, 00:20:56.805 "dhchap_digests": [ 00:20:56.805 "sha256", 00:20:56.805 "sha384", 00:20:56.805 "sha512" 00:20:56.805 ], 00:20:56.805 "dhchap_dhgroups": [ 00:20:56.805 "null", 00:20:56.805 "ffdhe2048", 00:20:56.805 "ffdhe3072", 00:20:56.805 "ffdhe4096", 00:20:56.805 "ffdhe6144", 00:20:56.805 "ffdhe8192" 00:20:56.805 ] 00:20:56.805 } 00:20:56.805 }, 00:20:56.805 { 00:20:56.805 "method": "bdev_nvme_set_hotplug", 00:20:56.805 "params": { 00:20:56.805 "period_us": 100000, 00:20:56.805 "enable": false 00:20:56.805 } 00:20:56.805 }, 00:20:56.805 { 00:20:56.805 "method": "bdev_malloc_create", 00:20:56.805 "params": { 00:20:56.805 "name": "malloc0", 00:20:56.805 "num_blocks": 8192, 00:20:56.805 "block_size": 4096, 00:20:56.805 "physical_block_size": 4096, 00:20:56.805 "uuid": "ee29ae70-3c7c-4f45-81aa-684517ab840b", 00:20:56.805 "optimal_io_boundary": 0, 00:20:56.805 "md_size": 0, 00:20:56.805 "dif_type": 0, 00:20:56.805 "dif_is_head_of_md": false, 00:20:56.805 "dif_pi_format": 0 00:20:56.805 } 00:20:56.805 }, 00:20:56.805 { 00:20:56.805 "method": "bdev_wait_for_examine" 00:20:56.805 } 00:20:56.805 ] 00:20:56.805 }, 00:20:56.805 { 00:20:56.805 "subsystem": "nbd", 00:20:56.805 "config": [] 00:20:56.805 }, 00:20:56.805 { 00:20:56.805 "subsystem": "scheduler", 00:20:56.805 "config": [ 00:20:56.805 { 00:20:56.805 "method": "framework_set_scheduler", 00:20:56.805 "params": { 00:20:56.805 "name": "static" 00:20:56.805 } 00:20:56.805 } 00:20:56.805 ] 00:20:56.805 }, 00:20:56.805 { 00:20:56.805 "subsystem": "nvmf", 00:20:56.805 "config": [ 00:20:56.805 { 00:20:56.805 "method": "nvmf_set_config", 00:20:56.805 "params": { 00:20:56.805 "discovery_filter": "match_any", 00:20:56.805 "admin_cmd_passthru": { 00:20:56.805 "identify_ctrlr": false 00:20:56.805 }, 00:20:56.805 "dhchap_digests": [ 00:20:56.805 "sha256", 00:20:56.805 "sha384", 00:20:56.805 "sha512" 00:20:56.805 ], 00:20:56.805 "dhchap_dhgroups": [ 00:20:56.805 "null", 00:20:56.805 "ffdhe2048", 00:20:56.806 "ffdhe3072", 00:20:56.806 "ffdhe4096", 00:20:56.806 "ffdhe6144", 00:20:56.806 "ffdhe8192" 00:20:56.806 ] 00:20:56.806 } 00:20:56.806 }, 00:20:56.806 { 00:20:56.806 "method": "nvmf_set_max_subsystems", 00:20:56.806 "params": { 00:20:56.806 "max_subsystems": 1024 00:20:56.806 } 00:20:56.806 }, 00:20:56.806 { 00:20:56.806 "method": "nvmf_set_crdt", 00:20:56.806 "params": { 00:20:56.806 "crdt1": 0, 00:20:56.806 "crdt2": 0, 00:20:56.806 "crdt3": 0 00:20:56.806 } 00:20:56.806 }, 00:20:56.806 { 00:20:56.806 "method": "nvmf_create_transport", 00:20:56.806 "params": { 00:20:56.806 "trtype": "TCP", 00:20:56.806 "max_queue_depth": 128, 00:20:56.806 "max_io_qpairs_per_ctrlr": 127, 00:20:56.806 "in_capsule_data_size": 4096, 00:20:56.806 "max_io_size": 131072, 00:20:56.806 "io_unit_size": 131072, 00:20:56.806 "max_aq_depth": 128, 00:20:56.806 "num_shared_buffers": 511, 00:20:56.806 "buf_cache_size": 4294967295, 00:20:56.806 "dif_insert_or_strip": false, 00:20:56.806 "zcopy": false, 00:20:56.806 "c2h_success": false, 00:20:56.806 "sock_priority": 0, 00:20:56.806 "abort_timeout_sec": 1, 00:20:56.806 "ack_timeout": 0, 00:20:56.806 "data_wr_pool_size": 0 00:20:56.806 } 00:20:56.806 }, 00:20:56.806 { 00:20:56.806 "method": "nvmf_create_subsystem", 00:20:56.806 "params": { 00:20:56.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.806 "allow_any_host": false, 00:20:56.806 "serial_number": "SPDK00000000000001", 00:20:56.806 "model_number": "SPDK bdev Controller", 00:20:56.806 "max_namespaces": 10, 00:20:56.806 "min_cntlid": 1, 00:20:56.806 
"max_cntlid": 65519, 00:20:56.806 "ana_reporting": false 00:20:56.806 } 00:20:56.806 }, 00:20:56.806 { 00:20:56.806 "method": "nvmf_subsystem_add_host", 00:20:56.806 "params": { 00:20:56.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.806 "host": "nqn.2016-06.io.spdk:host1", 00:20:56.806 "psk": "key0" 00:20:56.806 } 00:20:56.806 }, 00:20:56.806 { 00:20:56.806 "method": "nvmf_subsystem_add_ns", 00:20:56.806 "params": { 00:20:56.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.806 "namespace": { 00:20:56.806 "nsid": 1, 00:20:56.806 "bdev_name": "malloc0", 00:20:56.806 "nguid": "EE29AE703C7C4F4581AA684517AB840B", 00:20:56.806 "uuid": "ee29ae70-3c7c-4f45-81aa-684517ab840b", 00:20:56.806 "no_auto_visible": false 00:20:56.806 } 00:20:56.806 } 00:20:56.806 }, 00:20:56.806 { 00:20:56.806 "method": "nvmf_subsystem_add_listener", 00:20:56.806 "params": { 00:20:56.806 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:56.806 "listen_address": { 00:20:56.806 "trtype": "TCP", 00:20:56.806 "adrfam": "IPv4", 00:20:56.806 "traddr": "10.0.0.2", 00:20:56.806 "trsvcid": "4420" 00:20:56.806 }, 00:20:56.806 "secure_channel": true 00:20:56.806 } 00:20:56.806 } 00:20:56.806 ] 00:20:56.806 } 00:20:56.806 ] 00:20:56.806 }' 00:20:56.806 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:57.066 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:57.066 "subsystems": [ 00:20:57.066 { 00:20:57.066 "subsystem": "keyring", 00:20:57.066 "config": [ 00:20:57.066 { 00:20:57.066 "method": "keyring_file_add_key", 00:20:57.066 "params": { 00:20:57.066 "name": "key0", 00:20:57.066 "path": "/tmp/tmp.zCSsQ53vVc" 00:20:57.066 } 00:20:57.066 } 00:20:57.066 ] 00:20:57.066 }, 00:20:57.066 { 00:20:57.066 "subsystem": "iobuf", 00:20:57.066 "config": [ 00:20:57.066 { 00:20:57.066 "method": "iobuf_set_options", 00:20:57.066 "params": { 00:20:57.066 "small_pool_count": 8192, 00:20:57.066 "large_pool_count": 1024, 00:20:57.066 "small_bufsize": 8192, 00:20:57.066 "large_bufsize": 135168, 00:20:57.066 "enable_numa": false 00:20:57.066 } 00:20:57.066 } 00:20:57.066 ] 00:20:57.066 }, 00:20:57.066 { 00:20:57.066 "subsystem": "sock", 00:20:57.066 "config": [ 00:20:57.066 { 00:20:57.066 "method": "sock_set_default_impl", 00:20:57.066 "params": { 00:20:57.066 "impl_name": "posix" 00:20:57.066 } 00:20:57.066 }, 00:20:57.066 { 00:20:57.066 "method": "sock_impl_set_options", 00:20:57.066 "params": { 00:20:57.066 "impl_name": "ssl", 00:20:57.066 "recv_buf_size": 4096, 00:20:57.066 "send_buf_size": 4096, 00:20:57.066 "enable_recv_pipe": true, 00:20:57.066 "enable_quickack": false, 00:20:57.066 "enable_placement_id": 0, 00:20:57.066 "enable_zerocopy_send_server": true, 00:20:57.066 "enable_zerocopy_send_client": false, 00:20:57.066 "zerocopy_threshold": 0, 00:20:57.066 "tls_version": 0, 00:20:57.066 "enable_ktls": false 00:20:57.066 } 00:20:57.066 }, 00:20:57.066 { 00:20:57.066 "method": "sock_impl_set_options", 00:20:57.066 "params": { 00:20:57.066 "impl_name": "posix", 00:20:57.066 "recv_buf_size": 2097152, 00:20:57.066 "send_buf_size": 2097152, 00:20:57.066 "enable_recv_pipe": true, 00:20:57.066 "enable_quickack": false, 00:20:57.066 "enable_placement_id": 0, 00:20:57.066 "enable_zerocopy_send_server": true, 00:20:57.066 "enable_zerocopy_send_client": false, 00:20:57.066 "zerocopy_threshold": 0, 00:20:57.066 "tls_version": 0, 00:20:57.066 "enable_ktls": false 00:20:57.066 } 00:20:57.066 
} 00:20:57.066 ] 00:20:57.066 }, 00:20:57.066 { 00:20:57.066 "subsystem": "vmd", 00:20:57.066 "config": [] 00:20:57.066 }, 00:20:57.066 { 00:20:57.066 "subsystem": "accel", 00:20:57.066 "config": [ 00:20:57.066 { 00:20:57.066 "method": "accel_set_options", 00:20:57.066 "params": { 00:20:57.066 "small_cache_size": 128, 00:20:57.066 "large_cache_size": 16, 00:20:57.066 "task_count": 2048, 00:20:57.066 "sequence_count": 2048, 00:20:57.066 "buf_count": 2048 00:20:57.066 } 00:20:57.066 } 00:20:57.066 ] 00:20:57.066 }, 00:20:57.066 { 00:20:57.066 "subsystem": "bdev", 00:20:57.066 "config": [ 00:20:57.066 { 00:20:57.066 "method": "bdev_set_options", 00:20:57.066 "params": { 00:20:57.066 "bdev_io_pool_size": 65535, 00:20:57.066 "bdev_io_cache_size": 256, 00:20:57.066 "bdev_auto_examine": true, 00:20:57.066 "iobuf_small_cache_size": 128, 00:20:57.066 "iobuf_large_cache_size": 16 00:20:57.066 } 00:20:57.066 }, 00:20:57.066 { 00:20:57.066 "method": "bdev_raid_set_options", 00:20:57.066 "params": { 00:20:57.066 "process_window_size_kb": 1024, 00:20:57.066 "process_max_bandwidth_mb_sec": 0 00:20:57.066 } 00:20:57.066 }, 00:20:57.067 { 00:20:57.067 "method": "bdev_iscsi_set_options", 00:20:57.067 "params": { 00:20:57.067 "timeout_sec": 30 00:20:57.067 } 00:20:57.067 }, 00:20:57.067 { 00:20:57.067 "method": "bdev_nvme_set_options", 00:20:57.067 "params": { 00:20:57.067 "action_on_timeout": "none", 00:20:57.067 "timeout_us": 0, 00:20:57.067 "timeout_admin_us": 0, 00:20:57.067 "keep_alive_timeout_ms": 10000, 00:20:57.067 "arbitration_burst": 0, 00:20:57.067 "low_priority_weight": 0, 00:20:57.067 "medium_priority_weight": 0, 00:20:57.067 "high_priority_weight": 0, 00:20:57.067 "nvme_adminq_poll_period_us": 10000, 00:20:57.067 "nvme_ioq_poll_period_us": 0, 00:20:57.067 "io_queue_requests": 512, 00:20:57.067 "delay_cmd_submit": true, 00:20:57.067 "transport_retry_count": 4, 00:20:57.067 "bdev_retry_count": 3, 00:20:57.067 "transport_ack_timeout": 0, 00:20:57.067 "ctrlr_loss_timeout_sec": 0, 00:20:57.067 "reconnect_delay_sec": 0, 00:20:57.067 "fast_io_fail_timeout_sec": 0, 00:20:57.067 "disable_auto_failback": false, 00:20:57.067 "generate_uuids": false, 00:20:57.067 "transport_tos": 0, 00:20:57.067 "nvme_error_stat": false, 00:20:57.067 "rdma_srq_size": 0, 00:20:57.067 "io_path_stat": false, 00:20:57.067 "allow_accel_sequence": false, 00:20:57.067 "rdma_max_cq_size": 0, 00:20:57.067 "rdma_cm_event_timeout_ms": 0, 00:20:57.067 "dhchap_digests": [ 00:20:57.067 "sha256", 00:20:57.067 "sha384", 00:20:57.067 "sha512" 00:20:57.067 ], 00:20:57.067 "dhchap_dhgroups": [ 00:20:57.067 "null", 00:20:57.067 "ffdhe2048", 00:20:57.067 "ffdhe3072", 00:20:57.067 "ffdhe4096", 00:20:57.067 "ffdhe6144", 00:20:57.067 "ffdhe8192" 00:20:57.067 ] 00:20:57.067 } 00:20:57.067 }, 00:20:57.067 { 00:20:57.067 "method": "bdev_nvme_attach_controller", 00:20:57.067 "params": { 00:20:57.067 "name": "TLSTEST", 00:20:57.067 "trtype": "TCP", 00:20:57.067 "adrfam": "IPv4", 00:20:57.067 "traddr": "10.0.0.2", 00:20:57.067 "trsvcid": "4420", 00:20:57.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.067 "prchk_reftag": false, 00:20:57.067 "prchk_guard": false, 00:20:57.067 "ctrlr_loss_timeout_sec": 0, 00:20:57.067 "reconnect_delay_sec": 0, 00:20:57.067 "fast_io_fail_timeout_sec": 0, 00:20:57.067 "psk": "key0", 00:20:57.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:57.067 "hdgst": false, 00:20:57.067 "ddgst": false, 00:20:57.067 "multipath": "multipath" 00:20:57.067 } 00:20:57.067 }, 00:20:57.067 { 00:20:57.067 "method": 
"bdev_nvme_set_hotplug", 00:20:57.067 "params": { 00:20:57.067 "period_us": 100000, 00:20:57.067 "enable": false 00:20:57.067 } 00:20:57.067 }, 00:20:57.067 { 00:20:57.067 "method": "bdev_wait_for_examine" 00:20:57.067 } 00:20:57.067 ] 00:20:57.067 }, 00:20:57.067 { 00:20:57.067 "subsystem": "nbd", 00:20:57.067 "config": [] 00:20:57.067 } 00:20:57.067 ] 00:20:57.067 }' 00:20:57.067 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 953071 00:20:57.067 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 953071 ']' 00:20:57.067 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 953071 00:20:57.067 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:57.067 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.067 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 953071 00:20:57.067 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:57.067 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:57.067 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 953071' 00:20:57.067 killing process with pid 953071 00:20:57.067 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 953071 00:20:57.067 Received shutdown signal, test time was about 10.000000 seconds 00:20:57.067 00:20:57.067 Latency(us) 00:20:57.067 [2024-12-05T12:25:19.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:57.067 [2024-12-05T12:25:19.635Z] =================================================================================================================== 00:20:57.067 [2024-12-05T12:25:19.635Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:57.067 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 953071 00:20:57.329 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 952708 00:20:57.329 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 952708 ']' 00:20:57.329 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 952708 00:20:57.329 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:20:57.329 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:57.329 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 952708 00:20:57.329 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:57.329 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:57.329 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 952708' 00:20:57.329 killing process with pid 952708 00:20:57.329 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 952708 00:20:57.329 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 952708 00:20:57.329 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:57.329 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:57.329 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:57.329 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.329 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:57.329 "subsystems": [ 00:20:57.329 { 00:20:57.329 "subsystem": "keyring", 00:20:57.329 "config": [ 00:20:57.329 { 00:20:57.329 "method": "keyring_file_add_key", 00:20:57.329 "params": { 00:20:57.329 "name": "key0", 00:20:57.329 "path": "/tmp/tmp.zCSsQ53vVc" 00:20:57.329 } 00:20:57.329 } 00:20:57.329 ] 00:20:57.329 }, 00:20:57.329 { 00:20:57.329 "subsystem": "iobuf", 00:20:57.329 "config": [ 00:20:57.329 { 00:20:57.329 "method": "iobuf_set_options", 00:20:57.329 "params": { 00:20:57.329 "small_pool_count": 8192, 00:20:57.329 "large_pool_count": 1024, 00:20:57.329 "small_bufsize": 8192, 00:20:57.329 "large_bufsize": 135168, 00:20:57.329 "enable_numa": false 00:20:57.329 } 00:20:57.329 } 00:20:57.329 ] 00:20:57.329 }, 00:20:57.329 { 00:20:57.329 "subsystem": "sock", 00:20:57.329 "config": [ 00:20:57.329 { 00:20:57.329 "method": "sock_set_default_impl", 00:20:57.329 "params": { 00:20:57.329 "impl_name": "posix" 00:20:57.329 } 00:20:57.329 }, 00:20:57.329 { 00:20:57.329 "method": "sock_impl_set_options", 00:20:57.329 "params": { 00:20:57.329 "impl_name": "ssl", 00:20:57.329 "recv_buf_size": 4096, 00:20:57.329 "send_buf_size": 4096, 00:20:57.329 "enable_recv_pipe": true, 00:20:57.329 "enable_quickack": false, 00:20:57.329 "enable_placement_id": 0, 00:20:57.329 "enable_zerocopy_send_server": true, 00:20:57.329 "enable_zerocopy_send_client": false, 00:20:57.329 "zerocopy_threshold": 0, 00:20:57.329 "tls_version": 0, 00:20:57.329 "enable_ktls": false 00:20:57.329 } 00:20:57.329 }, 00:20:57.329 { 00:20:57.329 "method": "sock_impl_set_options", 00:20:57.329 "params": { 00:20:57.329 "impl_name": "posix", 00:20:57.329 "recv_buf_size": 2097152, 00:20:57.329 "send_buf_size": 2097152, 00:20:57.329 "enable_recv_pipe": true, 00:20:57.329 "enable_quickack": false, 00:20:57.329 "enable_placement_id": 0, 00:20:57.329 "enable_zerocopy_send_server": true, 00:20:57.329 "enable_zerocopy_send_client": false, 00:20:57.329 "zerocopy_threshold": 0, 00:20:57.329 "tls_version": 0, 00:20:57.329 "enable_ktls": false 00:20:57.329 } 00:20:57.329 } 00:20:57.329 ] 00:20:57.329 }, 00:20:57.329 { 00:20:57.329 "subsystem": "vmd", 00:20:57.329 "config": [] 00:20:57.329 }, 00:20:57.329 { 00:20:57.329 "subsystem": "accel", 00:20:57.329 "config": [ 00:20:57.329 { 00:20:57.329 "method": "accel_set_options", 00:20:57.329 "params": { 00:20:57.329 "small_cache_size": 128, 00:20:57.329 "large_cache_size": 16, 00:20:57.329 "task_count": 2048, 00:20:57.329 "sequence_count": 2048, 00:20:57.329 "buf_count": 2048 00:20:57.329 } 00:20:57.329 } 00:20:57.329 ] 00:20:57.330 }, 00:20:57.330 { 00:20:57.330 "subsystem": "bdev", 00:20:57.330 "config": [ 00:20:57.330 { 00:20:57.330 "method": "bdev_set_options", 00:20:57.330 "params": { 00:20:57.330 "bdev_io_pool_size": 65535, 00:20:57.330 "bdev_io_cache_size": 256, 00:20:57.330 "bdev_auto_examine": true, 00:20:57.330 "iobuf_small_cache_size": 128, 00:20:57.330 "iobuf_large_cache_size": 16 00:20:57.330 } 00:20:57.330 }, 00:20:57.330 { 00:20:57.330 "method": "bdev_raid_set_options", 00:20:57.330 "params": { 00:20:57.330 
"process_window_size_kb": 1024, 00:20:57.330 "process_max_bandwidth_mb_sec": 0 00:20:57.330 } 00:20:57.330 }, 00:20:57.330 { 00:20:57.330 "method": "bdev_iscsi_set_options", 00:20:57.330 "params": { 00:20:57.330 "timeout_sec": 30 00:20:57.330 } 00:20:57.330 }, 00:20:57.330 { 00:20:57.330 "method": "bdev_nvme_set_options", 00:20:57.330 "params": { 00:20:57.330 "action_on_timeout": "none", 00:20:57.330 "timeout_us": 0, 00:20:57.330 "timeout_admin_us": 0, 00:20:57.330 "keep_alive_timeout_ms": 10000, 00:20:57.330 "arbitration_burst": 0, 00:20:57.330 "low_priority_weight": 0, 00:20:57.330 "medium_priority_weight": 0, 00:20:57.330 "high_priority_weight": 0, 00:20:57.330 "nvme_adminq_poll_period_us": 10000, 00:20:57.330 "nvme_ioq_poll_period_us": 0, 00:20:57.330 "io_queue_requests": 0, 00:20:57.330 "delay_cmd_submit": true, 00:20:57.330 "transport_retry_count": 4, 00:20:57.330 "bdev_retry_count": 3, 00:20:57.330 "transport_ack_timeout": 0, 00:20:57.330 "ctrlr_loss_timeout_sec": 0, 00:20:57.330 "reconnect_delay_sec": 0, 00:20:57.330 "fast_io_fail_timeout_sec": 0, 00:20:57.330 "disable_auto_failback": false, 00:20:57.330 "generate_uuids": false, 00:20:57.330 "transport_tos": 0, 00:20:57.330 "nvme_error_stat": false, 00:20:57.330 "rdma_srq_size": 0, 00:20:57.330 "io_path_stat": false, 00:20:57.330 "allow_accel_sequence": false, 00:20:57.330 "rdma_max_cq_size": 0, 00:20:57.330 "rdma_cm_event_timeout_ms": 0, 00:20:57.330 "dhchap_digests": [ 00:20:57.330 "sha256", 00:20:57.330 "sha384", 00:20:57.330 "sha512" 00:20:57.330 ], 00:20:57.330 "dhchap_dhgroups": [ 00:20:57.330 "null", 00:20:57.330 "ffdhe2048", 00:20:57.330 "ffdhe3072", 00:20:57.330 "ffdhe4096", 00:20:57.330 "ffdhe6144", 00:20:57.330 "ffdhe8192" 00:20:57.330 ] 00:20:57.330 } 00:20:57.330 }, 00:20:57.330 { 00:20:57.330 "method": "bdev_nvme_set_hotplug", 00:20:57.330 "params": { 00:20:57.330 "period_us": 100000, 00:20:57.330 "enable": false 00:20:57.330 } 00:20:57.330 }, 00:20:57.330 { 00:20:57.330 "method": "bdev_malloc_create", 00:20:57.330 "params": { 00:20:57.330 "name": "malloc0", 00:20:57.330 "num_blocks": 8192, 00:20:57.330 "block_size": 4096, 00:20:57.330 "physical_block_size": 4096, 00:20:57.330 "uuid": "ee29ae70-3c7c-4f45-81aa-684517ab840b", 00:20:57.330 "optimal_io_boundary": 0, 00:20:57.330 "md_size": 0, 00:20:57.330 "dif_type": 0, 00:20:57.330 "dif_is_head_of_md": false, 00:20:57.330 "dif_pi_format": 0 00:20:57.330 } 00:20:57.330 }, 00:20:57.330 { 00:20:57.330 "method": "bdev_wait_for_examine" 00:20:57.330 } 00:20:57.330 ] 00:20:57.330 }, 00:20:57.330 { 00:20:57.330 "subsystem": "nbd", 00:20:57.330 "config": [] 00:20:57.330 }, 00:20:57.330 { 00:20:57.330 "subsystem": "scheduler", 00:20:57.330 "config": [ 00:20:57.330 { 00:20:57.330 "method": "framework_set_scheduler", 00:20:57.330 "params": { 00:20:57.330 "name": "static" 00:20:57.330 } 00:20:57.330 } 00:20:57.330 ] 00:20:57.330 }, 00:20:57.330 { 00:20:57.330 "subsystem": "nvmf", 00:20:57.330 "config": [ 00:20:57.330 { 00:20:57.330 "method": "nvmf_set_config", 00:20:57.330 "params": { 00:20:57.330 "discovery_filter": "match_any", 00:20:57.330 "admin_cmd_passthru": { 00:20:57.330 "identify_ctrlr": false 00:20:57.330 }, 00:20:57.330 "dhchap_digests": [ 00:20:57.330 "sha256", 00:20:57.330 "sha384", 00:20:57.330 "sha512" 00:20:57.330 ], 00:20:57.330 "dhchap_dhgroups": [ 00:20:57.330 "null", 00:20:57.330 "ffdhe2048", 00:20:57.330 "ffdhe3072", 00:20:57.330 "ffdhe4096", 00:20:57.330 "ffdhe6144", 00:20:57.330 "ffdhe8192" 00:20:57.330 ] 00:20:57.330 } 00:20:57.330 }, 00:20:57.330 { 
00:20:57.330 "method": "nvmf_set_max_subsystems", 00:20:57.330 "params": { 00:20:57.330 "max_subsystems": 1024 00:20:57.330 } 00:20:57.330 }, 00:20:57.330 { 00:20:57.330 "method": "nvmf_set_crdt", 00:20:57.330 "params": { 00:20:57.330 "crdt1": 0, 00:20:57.330 "crdt2": 0, 00:20:57.330 "crdt3": 0 00:20:57.330 } 00:20:57.330 }, 00:20:57.330 { 00:20:57.330 "method": "nvmf_create_transport", 00:20:57.330 "params": { 00:20:57.330 "trtype": "TCP", 00:20:57.330 "max_queue_depth": 128, 00:20:57.330 "max_io_qpairs_per_ctrlr": 127, 00:20:57.330 "in_capsule_data_size": 4096, 00:20:57.330 "max_io_size": 131072, 00:20:57.330 "io_unit_size": 131072, 00:20:57.330 "max_aq_depth": 128, 00:20:57.330 "num_shared_buffers": 511, 00:20:57.330 "buf_cache_size": 4294967295, 00:20:57.330 "dif_insert_or_strip": false, 00:20:57.330 "zcopy": false, 00:20:57.330 "c2h_success": false, 00:20:57.330 "sock_priority": 0, 00:20:57.330 "abort_timeout_sec": 1, 00:20:57.330 "ack_timeout": 0, 00:20:57.330 "data_wr_pool_size": 0 00:20:57.330 } 00:20:57.330 }, 00:20:57.330 { 00:20:57.330 "method": "nvmf_create_subsystem", 00:20:57.330 "params": { 00:20:57.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.330 "allow_any_host": false, 00:20:57.330 "serial_number": "SPDK00000000000001", 00:20:57.330 "model_number": "SPDK bdev Controller", 00:20:57.330 "max_namespaces": 10, 00:20:57.330 "min_cntlid": 1, 00:20:57.330 "max_cntlid": 65519, 00:20:57.330 "ana_reporting": false 00:20:57.330 } 00:20:57.330 }, 00:20:57.330 { 00:20:57.330 "method": "nvmf_subsystem_add_host", 00:20:57.330 "params": { 00:20:57.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.330 "host": "nqn.2016-06.io.spdk:host1", 00:20:57.330 "psk": "key0" 00:20:57.330 } 00:20:57.330 }, 00:20:57.330 { 00:20:57.330 "method": "nvmf_subsystem_add_ns", 00:20:57.330 "params": { 00:20:57.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.330 "namespace": { 00:20:57.330 "nsid": 1, 00:20:57.330 "bdev_name": "malloc0", 00:20:57.330 "nguid": "EE29AE703C7C4F4581AA684517AB840B", 00:20:57.330 "uuid": "ee29ae70-3c7c-4f45-81aa-684517ab840b", 00:20:57.330 "no_auto_visible": false 00:20:57.330 } 00:20:57.330 } 00:20:57.330 }, 00:20:57.330 { 00:20:57.330 "method": "nvmf_subsystem_add_listener", 00:20:57.330 "params": { 00:20:57.330 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:57.330 "listen_address": { 00:20:57.330 "trtype": "TCP", 00:20:57.330 "adrfam": "IPv4", 00:20:57.330 "traddr": "10.0.0.2", 00:20:57.330 "trsvcid": "4420" 00:20:57.330 }, 00:20:57.330 "secure_channel": true 00:20:57.330 } 00:20:57.330 } 00:20:57.330 ] 00:20:57.330 } 00:20:57.330 ] 00:20:57.330 }' 00:20:57.330 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=953417 00:20:57.330 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 953417 00:20:57.330 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:57.330 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 953417 ']' 00:20:57.330 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.330 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:57.330 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:20:57.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.331 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:57.331 13:25:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.331 [2024-12-05 13:25:19.890456] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:20:57.331 [2024-12-05 13:25:19.890505] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.592 [2024-12-05 13:25:19.985389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.592 [2024-12-05 13:25:20.013852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.592 [2024-12-05 13:25:20.013891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.592 [2024-12-05 13:25:20.013896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.592 [2024-12-05 13:25:20.013901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.592 [2024-12-05 13:25:20.013905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:57.592 [2024-12-05 13:25:20.014320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.852 [2024-12-05 13:25:20.208459] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.853 [2024-12-05 13:25:20.240486] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:57.853 [2024-12-05 13:25:20.240690] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:58.113 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:58.113 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:58.113 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:58.113 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:58.113 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.374 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:58.374 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=953601 00:20:58.374 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 953601 /var/tmp/bdevperf.sock 00:20:58.374 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 953601 ']' 00:20:58.374 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.374 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:58.374 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:58.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:58.374 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:58.374 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:58.374 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.374 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:58.374 "subsystems": [ 00:20:58.374 { 00:20:58.374 "subsystem": "keyring", 00:20:58.374 "config": [ 00:20:58.374 { 00:20:58.374 "method": "keyring_file_add_key", 00:20:58.374 "params": { 00:20:58.374 "name": "key0", 00:20:58.374 "path": "/tmp/tmp.zCSsQ53vVc" 00:20:58.374 } 00:20:58.374 } 00:20:58.374 ] 00:20:58.374 }, 00:20:58.374 { 00:20:58.374 "subsystem": "iobuf", 00:20:58.374 "config": [ 00:20:58.374 { 00:20:58.374 "method": "iobuf_set_options", 00:20:58.374 "params": { 00:20:58.374 "small_pool_count": 8192, 00:20:58.374 "large_pool_count": 1024, 00:20:58.374 "small_bufsize": 8192, 00:20:58.374 "large_bufsize": 135168, 00:20:58.374 "enable_numa": false 00:20:58.374 } 00:20:58.374 } 00:20:58.374 ] 00:20:58.374 }, 00:20:58.374 { 00:20:58.374 "subsystem": "sock", 00:20:58.374 "config": [ 00:20:58.374 { 00:20:58.374 "method": "sock_set_default_impl", 00:20:58.374 "params": { 00:20:58.374 "impl_name": "posix" 00:20:58.374 } 00:20:58.374 }, 00:20:58.374 { 00:20:58.374 "method": "sock_impl_set_options", 00:20:58.374 "params": { 00:20:58.374 "impl_name": "ssl", 00:20:58.374 "recv_buf_size": 4096, 00:20:58.374 "send_buf_size": 4096, 00:20:58.374 "enable_recv_pipe": true, 00:20:58.374 "enable_quickack": false, 00:20:58.374 "enable_placement_id": 0, 00:20:58.375 "enable_zerocopy_send_server": true, 00:20:58.375 "enable_zerocopy_send_client": false, 00:20:58.375 "zerocopy_threshold": 0, 00:20:58.375 "tls_version": 0, 00:20:58.375 "enable_ktls": false 00:20:58.375 } 00:20:58.375 }, 00:20:58.375 { 00:20:58.375 "method": "sock_impl_set_options", 00:20:58.375 "params": { 00:20:58.375 "impl_name": "posix", 00:20:58.375 "recv_buf_size": 2097152, 00:20:58.375 "send_buf_size": 2097152, 00:20:58.375 "enable_recv_pipe": true, 00:20:58.375 "enable_quickack": false, 00:20:58.375 "enable_placement_id": 0, 00:20:58.375 "enable_zerocopy_send_server": true, 00:20:58.375 "enable_zerocopy_send_client": false, 00:20:58.375 "zerocopy_threshold": 0, 00:20:58.375 "tls_version": 0, 00:20:58.375 "enable_ktls": false 00:20:58.375 } 00:20:58.375 } 00:20:58.375 ] 00:20:58.375 }, 00:20:58.375 { 00:20:58.375 "subsystem": "vmd", 00:20:58.375 "config": [] 00:20:58.375 }, 00:20:58.375 { 00:20:58.375 "subsystem": "accel", 00:20:58.375 "config": [ 00:20:58.375 { 00:20:58.375 "method": "accel_set_options", 00:20:58.375 "params": { 00:20:58.375 "small_cache_size": 128, 00:20:58.375 "large_cache_size": 16, 00:20:58.375 "task_count": 2048, 00:20:58.375 "sequence_count": 2048, 00:20:58.375 "buf_count": 2048 00:20:58.375 } 00:20:58.375 } 00:20:58.375 ] 00:20:58.375 }, 00:20:58.375 { 00:20:58.375 "subsystem": "bdev", 00:20:58.375 "config": [ 00:20:58.375 { 00:20:58.375 "method": "bdev_set_options", 00:20:58.375 "params": { 00:20:58.375 "bdev_io_pool_size": 65535, 00:20:58.375 "bdev_io_cache_size": 256, 00:20:58.375 "bdev_auto_examine": true, 00:20:58.375 "iobuf_small_cache_size": 128, 
00:20:58.375 "iobuf_large_cache_size": 16 00:20:58.375 } 00:20:58.375 }, 00:20:58.375 { 00:20:58.375 "method": "bdev_raid_set_options", 00:20:58.375 "params": { 00:20:58.375 "process_window_size_kb": 1024, 00:20:58.375 "process_max_bandwidth_mb_sec": 0 00:20:58.375 } 00:20:58.375 }, 00:20:58.375 { 00:20:58.375 "method": "bdev_iscsi_set_options", 00:20:58.375 "params": { 00:20:58.375 "timeout_sec": 30 00:20:58.375 } 00:20:58.375 }, 00:20:58.375 { 00:20:58.375 "method": "bdev_nvme_set_options", 00:20:58.375 "params": { 00:20:58.375 "action_on_timeout": "none", 00:20:58.375 "timeout_us": 0, 00:20:58.375 "timeout_admin_us": 0, 00:20:58.375 "keep_alive_timeout_ms": 10000, 00:20:58.375 "arbitration_burst": 0, 00:20:58.375 "low_priority_weight": 0, 00:20:58.375 "medium_priority_weight": 0, 00:20:58.375 "high_priority_weight": 0, 00:20:58.375 "nvme_adminq_poll_period_us": 10000, 00:20:58.375 "nvme_ioq_poll_period_us": 0, 00:20:58.375 "io_queue_requests": 512, 00:20:58.375 "delay_cmd_submit": true, 00:20:58.375 "transport_retry_count": 4, 00:20:58.375 "bdev_retry_count": 3, 00:20:58.375 "transport_ack_timeout": 0, 00:20:58.375 "ctrlr_loss_timeout_sec": 0, 00:20:58.375 "reconnect_delay_sec": 0, 00:20:58.375 "fast_io_fail_timeout_sec": 0, 00:20:58.375 "disable_auto_failback": false, 00:20:58.375 "generate_uuids": false, 00:20:58.375 "transport_tos": 0, 00:20:58.375 "nvme_error_stat": false, 00:20:58.375 "rdma_srq_size": 0, 00:20:58.375 "io_path_stat": false, 00:20:58.375 "allow_accel_sequence": false, 00:20:58.375 "rdma_max_cq_size": 0, 00:20:58.375 "rdma_cm_event_timeout_ms": 0, 00:20:58.375 "dhchap_digests": [ 00:20:58.375 "sha256", 00:20:58.375 "sha384", 00:20:58.375 "sha512" 00:20:58.375 ], 00:20:58.375 "dhchap_dhgroups": [ 00:20:58.375 "null", 00:20:58.375 "ffdhe2048", 00:20:58.375 "ffdhe3072", 00:20:58.375 "ffdhe4096", 00:20:58.375 "ffdhe6144", 00:20:58.375 "ffdhe8192" 00:20:58.375 ] 00:20:58.375 } 00:20:58.375 }, 00:20:58.375 { 00:20:58.375 "method": "bdev_nvme_attach_controller", 00:20:58.375 "params": { 00:20:58.375 "name": "TLSTEST", 00:20:58.375 "trtype": "TCP", 00:20:58.375 "adrfam": "IPv4", 00:20:58.375 "traddr": "10.0.0.2", 00:20:58.375 "trsvcid": "4420", 00:20:58.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:58.375 "prchk_reftag": false, 00:20:58.375 "prchk_guard": false, 00:20:58.375 "ctrlr_loss_timeout_sec": 0, 00:20:58.375 "reconnect_delay_sec": 0, 00:20:58.375 "fast_io_fail_timeout_sec": 0, 00:20:58.375 "psk": "key0", 00:20:58.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:58.375 "hdgst": false, 00:20:58.375 "ddgst": false, 00:20:58.375 "multipath": "multipath" 00:20:58.375 } 00:20:58.375 }, 00:20:58.375 { 00:20:58.375 "method": "bdev_nvme_set_hotplug", 00:20:58.375 "params": { 00:20:58.375 "period_us": 100000, 00:20:58.375 "enable": false 00:20:58.375 } 00:20:58.375 }, 00:20:58.375 { 00:20:58.375 "method": "bdev_wait_for_examine" 00:20:58.375 } 00:20:58.375 ] 00:20:58.375 }, 00:20:58.375 { 00:20:58.375 "subsystem": "nbd", 00:20:58.375 "config": [] 00:20:58.375 } 00:20:58.375 ] 00:20:58.375 }' 00:20:58.375 [2024-12-05 13:25:20.776095] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:20:58.375 [2024-12-05 13:25:20.776148] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953601 ] 00:20:58.376 [2024-12-05 13:25:20.839839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.376 [2024-12-05 13:25:20.868996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:58.636 [2024-12-05 13:25:21.004003] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:59.208 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:59.208 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:59.208 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:59.208 Running I/O for 10 seconds... 00:21:01.089 5724.00 IOPS, 22.36 MiB/s [2024-12-05T12:25:25.065Z] 5948.50 IOPS, 23.24 MiB/s [2024-12-05T12:25:26.014Z] 5984.67 IOPS, 23.38 MiB/s [2024-12-05T12:25:26.953Z] 6123.25 IOPS, 23.92 MiB/s [2024-12-05T12:25:27.895Z] 6192.40 IOPS, 24.19 MiB/s [2024-12-05T12:25:28.836Z] 6155.17 IOPS, 24.04 MiB/s [2024-12-05T12:25:29.778Z] 6156.57 IOPS, 24.05 MiB/s [2024-12-05T12:25:30.716Z] 6138.50 IOPS, 23.98 MiB/s [2024-12-05T12:25:32.101Z] 6058.11 IOPS, 23.66 MiB/s [2024-12-05T12:25:32.101Z] 5944.80 IOPS, 23.22 MiB/s 00:21:09.533 Latency(us) 00:21:09.533 [2024-12-05T12:25:32.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.533 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:09.533 Verification LBA range: start 0x0 length 0x2000 00:21:09.533 TLSTESTn1 : 10.01 5948.99 23.24 0.00 0.00 21484.15 5652.48 24029.87 00:21:09.533 [2024-12-05T12:25:32.101Z] =================================================================================================================== 00:21:09.533 [2024-12-05T12:25:32.101Z] Total : 5948.99 23.24 0.00 0.00 21484.15 5652.48 24029.87 00:21:09.533 { 00:21:09.533 "results": [ 00:21:09.533 { 00:21:09.533 "job": "TLSTESTn1", 00:21:09.533 "core_mask": "0x4", 00:21:09.533 "workload": "verify", 00:21:09.533 "status": "finished", 00:21:09.533 "verify_range": { 00:21:09.533 "start": 0, 00:21:09.533 "length": 8192 00:21:09.533 }, 00:21:09.533 "queue_depth": 128, 00:21:09.533 "io_size": 4096, 00:21:09.533 "runtime": 10.014145, 00:21:09.533 "iops": 5948.985160490486, 00:21:09.533 "mibps": 23.23822328316596, 00:21:09.533 "io_failed": 0, 00:21:09.533 "io_timeout": 0, 00:21:09.533 "avg_latency_us": 21484.14864829176, 00:21:09.533 "min_latency_us": 5652.48, 00:21:09.533 "max_latency_us": 24029.866666666665 00:21:09.533 } 00:21:09.533 ], 00:21:09.533 "core_count": 1 00:21:09.533 } 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 953601 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 953601 ']' 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 953601 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 953601 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 953601' 00:21:09.533 killing process with pid 953601 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 953601 00:21:09.533 Received shutdown signal, test time was about 10.000000 seconds 00:21:09.533 00:21:09.533 Latency(us) 00:21:09.533 [2024-12-05T12:25:32.101Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.533 [2024-12-05T12:25:32.101Z] =================================================================================================================== 00:21:09.533 [2024-12-05T12:25:32.101Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 953601 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 953417 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 953417 ']' 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 953417 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 953417 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 953417' 00:21:09.533 killing process with pid 953417 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 953417 00:21:09.533 13:25:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 953417 00:21:09.533 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:09.533 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:09.533 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:09.533 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.533 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=955790 00:21:09.533 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 955790 00:21:09.533 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:09.533 13:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 955790 ']' 00:21:09.533 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:09.533 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:09.533 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:09.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:09.533 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:09.533 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:09.794 [2024-12-05 13:25:32.118829] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:21:09.794 [2024-12-05 13:25:32.118899] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:09.794 [2024-12-05 13:25:32.204095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.794 [2024-12-05 13:25:32.239915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.794 [2024-12-05 13:25:32.239952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.794 [2024-12-05 13:25:32.239962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.794 [2024-12-05 13:25:32.239970] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.794 [2024-12-05 13:25:32.239977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
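setup_nvmf_tgt, about to run again at target/tls.sh@221, boils down to the same seven RPCs on every invocation in this log; condensed here into a standalone sketch, where rpc.py abbreviates scripts/rpc.py talking to the default /var/tmp/spdk.sock:

rpc.py nvmf_create_transport -t tcp -o                  # TCP transport; -o disables the C2H success optimization
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k requests a TLS listener (experimental)
rpc.py bdev_malloc_create 32 4096 -b malloc0            # 32 MiB ram bdev, 4 KiB blocks
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.zCSsQ53vVc
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0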
00:21:09.794 [2024-12-05 13:25:32.240557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.364 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:10.364 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:10.364 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:10.364 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:10.364 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.658 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.658 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.zCSsQ53vVc 00:21:10.658 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zCSsQ53vVc 00:21:10.658 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:10.658 [2024-12-05 13:25:33.070161] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.658 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:10.918 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:10.918 [2024-12-05 13:25:33.402999] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:10.918 [2024-12-05 13:25:33.403242] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.918 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:11.178 malloc0 00:21:11.178 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:11.438 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zCSsQ53vVc 00:21:11.439 13:25:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:11.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=956158 00:21:11.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:11.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:11.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 956158 /var/tmp/bdevperf.sock 00:21:11.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # '[' -z 956158 ']' 00:21:11.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.699 [2024-12-05 13:25:34.127156] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:21:11.699 [2024-12-05 13:25:34.127211] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956158 ] 00:21:11.699 [2024-12-05 13:25:34.216728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.699 [2024-12-05 13:25:34.246546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.640 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.640 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:12.640 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zCSsQ53vVc 00:21:12.640 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:12.899 [2024-12-05 13:25:35.211270] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:12.899 nvme0n1 00:21:12.899 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:12.899 Running I/O for 1 seconds... 
00:21:14.098 5497.00 IOPS, 21.47 MiB/s 00:21:14.098 Latency(us) 00:21:14.098 [2024-12-05T12:25:36.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.098 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:14.098 Verification LBA range: start 0x0 length 0x2000 00:21:14.098 nvme0n1 : 1.09 5182.60 20.24 0.00 0.00 24132.33 4560.21 84322.99 00:21:14.098 [2024-12-05T12:25:36.666Z] =================================================================================================================== 00:21:14.098 [2024-12-05T12:25:36.666Z] Total : 5182.60 20.24 0.00 0.00 24132.33 4560.21 84322.99 00:21:14.098 { 00:21:14.098 "results": [ 00:21:14.098 { 00:21:14.098 "job": "nvme0n1", 00:21:14.098 "core_mask": "0x2", 00:21:14.098 "workload": "verify", 00:21:14.098 "status": "finished", 00:21:14.098 "verify_range": { 00:21:14.098 "start": 0, 00:21:14.098 "length": 8192 00:21:14.098 }, 00:21:14.098 "queue_depth": 128, 00:21:14.098 "io_size": 4096, 00:21:14.098 "runtime": 1.085362, 00:21:14.098 "iops": 5182.602670813977, 00:21:14.098 "mibps": 20.244541682867098, 00:21:14.098 "io_failed": 0, 00:21:14.098 "io_timeout": 0, 00:21:14.098 "avg_latency_us": 24132.333416296296, 00:21:14.098 "min_latency_us": 4560.213333333333, 00:21:14.098 "max_latency_us": 84322.98666666666 00:21:14.098 } 00:21:14.098 ], 00:21:14.098 "core_count": 1 00:21:14.098 } 00:21:14.098 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 956158 00:21:14.098 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 956158 ']' 00:21:14.098 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 956158 00:21:14.098 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:14.098 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.098 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 956158 00:21:14.098 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:14.098 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:14.098 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 956158' 00:21:14.098 killing process with pid 956158 00:21:14.098 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 956158 00:21:14.098 Received shutdown signal, test time was about 1.000000 seconds 00:21:14.098 00:21:14.098 Latency(us) 00:21:14.098 [2024-12-05T12:25:36.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.098 [2024-12-05T12:25:36.666Z] =================================================================================================================== 00:21:14.098 [2024-12-05T12:25:36.666Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:14.098 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 956158 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 955790 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 955790 ']' 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 955790 00:21:14.358 13:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 955790 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 955790' 00:21:14.358 killing process with pid 955790 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 955790 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 955790 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=956829 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 956829 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 956829 ']' 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.358 13:25:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.359 [2024-12-05 13:25:36.891020] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:21:14.359 [2024-12-05 13:25:36.891076] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.618 [2024-12-05 13:25:36.972470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.618 [2024-12-05 13:25:37.006586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.618 [2024-12-05 13:25:37.006621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:14.618 [2024-12-05 13:25:37.006629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:14.618 [2024-12-05 13:25:37.006636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:14.618 [2024-12-05 13:25:37.006642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:14.618 [2024-12-05 13:25:37.007201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.618 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.618 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:14.619 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:14.619 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:14.619 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.619 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:14.619 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:14.619 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.619 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.619 [2024-12-05 13:25:37.147230] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:14.619 malloc0 00:21:14.619 [2024-12-05 13:25:37.173931] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:14.619 [2024-12-05 13:25:37.174163] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:14.879 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.879 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=956857 00:21:14.879 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 956857 /var/tmp/bdevperf.sock 00:21:14.879 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:14.879 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 956857 ']' 00:21:14.879 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.879 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.879 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:14.879 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.879 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.879 [2024-12-05 13:25:37.254326] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:21:14.879 [2024-12-05 13:25:37.254374] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid956857 ] 00:21:14.879 [2024-12-05 13:25:37.343267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.879 [2024-12-05 13:25:37.372873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.820 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.820 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:15.820 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zCSsQ53vVc 00:21:15.820 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:15.820 [2024-12-05 13:25:38.357580] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:16.081 nvme0n1 00:21:16.081 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:16.081 Running I/O for 1 seconds... 00:21:17.023 3482.00 IOPS, 13.60 MiB/s 00:21:17.023 Latency(us) 00:21:17.023 [2024-12-05T12:25:39.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.023 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:17.023 Verification LBA range: start 0x0 length 0x2000 00:21:17.023 nvme0n1 : 1.03 3512.11 13.72 0.00 0.00 36171.27 4587.52 81701.55 00:21:17.023 [2024-12-05T12:25:39.591Z] =================================================================================================================== 00:21:17.023 [2024-12-05T12:25:39.591Z] Total : 3512.11 13.72 0.00 0.00 36171.27 4587.52 81701.55 00:21:17.023 { 00:21:17.023 "results": [ 00:21:17.023 { 00:21:17.023 "job": "nvme0n1", 00:21:17.023 "core_mask": "0x2", 00:21:17.023 "workload": "verify", 00:21:17.023 "status": "finished", 00:21:17.023 "verify_range": { 00:21:17.023 "start": 0, 00:21:17.023 "length": 8192 00:21:17.023 }, 00:21:17.023 "queue_depth": 128, 00:21:17.023 "io_size": 4096, 00:21:17.023 "runtime": 1.027871, 00:21:17.023 "iops": 3512.113874211842, 00:21:17.023 "mibps": 13.719194821140007, 00:21:17.023 "io_failed": 0, 00:21:17.023 "io_timeout": 0, 00:21:17.023 "avg_latency_us": 36171.27298245614, 00:21:17.023 "min_latency_us": 4587.52, 00:21:17.023 "max_latency_us": 81701.54666666666 00:21:17.023 } 00:21:17.023 ], 00:21:17.023 "core_count": 1 00:21:17.023 } 00:21:17.284 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:17.284 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.284 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.284 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.284 13:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:17.284 "subsystems": [ 00:21:17.284 { 00:21:17.284 "subsystem": "keyring", 00:21:17.284 "config": [ 00:21:17.284 { 00:21:17.284 "method": "keyring_file_add_key", 00:21:17.284 "params": { 00:21:17.284 "name": "key0", 00:21:17.284 "path": "/tmp/tmp.zCSsQ53vVc" 00:21:17.284 } 00:21:17.284 } 00:21:17.284 ] 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "subsystem": "iobuf", 00:21:17.284 "config": [ 00:21:17.284 { 00:21:17.284 "method": "iobuf_set_options", 00:21:17.284 "params": { 00:21:17.284 "small_pool_count": 8192, 00:21:17.284 "large_pool_count": 1024, 00:21:17.284 "small_bufsize": 8192, 00:21:17.284 "large_bufsize": 135168, 00:21:17.284 "enable_numa": false 00:21:17.284 } 00:21:17.284 } 00:21:17.284 ] 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "subsystem": "sock", 00:21:17.284 "config": [ 00:21:17.284 { 00:21:17.284 "method": "sock_set_default_impl", 00:21:17.284 "params": { 00:21:17.284 "impl_name": "posix" 00:21:17.284 } 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "method": "sock_impl_set_options", 00:21:17.284 "params": { 00:21:17.284 "impl_name": "ssl", 00:21:17.284 "recv_buf_size": 4096, 00:21:17.284 "send_buf_size": 4096, 00:21:17.284 "enable_recv_pipe": true, 00:21:17.284 "enable_quickack": false, 00:21:17.284 "enable_placement_id": 0, 00:21:17.284 "enable_zerocopy_send_server": true, 00:21:17.284 "enable_zerocopy_send_client": false, 00:21:17.284 "zerocopy_threshold": 0, 00:21:17.284 "tls_version": 0, 00:21:17.284 "enable_ktls": false 00:21:17.284 } 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "method": "sock_impl_set_options", 00:21:17.284 "params": { 00:21:17.284 "impl_name": "posix", 00:21:17.284 "recv_buf_size": 2097152, 00:21:17.284 "send_buf_size": 2097152, 00:21:17.284 "enable_recv_pipe": true, 00:21:17.284 "enable_quickack": false, 00:21:17.284 "enable_placement_id": 0, 00:21:17.284 "enable_zerocopy_send_server": true, 00:21:17.284 "enable_zerocopy_send_client": false, 00:21:17.284 "zerocopy_threshold": 0, 00:21:17.284 "tls_version": 0, 00:21:17.284 "enable_ktls": false 00:21:17.284 } 00:21:17.284 } 00:21:17.284 ] 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "subsystem": "vmd", 00:21:17.284 "config": [] 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "subsystem": "accel", 00:21:17.284 "config": [ 00:21:17.284 { 00:21:17.284 "method": "accel_set_options", 00:21:17.284 "params": { 00:21:17.284 "small_cache_size": 128, 00:21:17.284 "large_cache_size": 16, 00:21:17.284 "task_count": 2048, 00:21:17.284 "sequence_count": 2048, 00:21:17.284 "buf_count": 2048 00:21:17.284 } 00:21:17.284 } 00:21:17.284 ] 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "subsystem": "bdev", 00:21:17.284 "config": [ 00:21:17.284 { 00:21:17.284 "method": "bdev_set_options", 00:21:17.284 "params": { 00:21:17.284 "bdev_io_pool_size": 65535, 00:21:17.284 "bdev_io_cache_size": 256, 00:21:17.284 "bdev_auto_examine": true, 00:21:17.284 "iobuf_small_cache_size": 128, 00:21:17.284 "iobuf_large_cache_size": 16 00:21:17.284 } 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "method": "bdev_raid_set_options", 00:21:17.284 "params": { 00:21:17.284 "process_window_size_kb": 1024, 00:21:17.284 "process_max_bandwidth_mb_sec": 0 00:21:17.284 } 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "method": "bdev_iscsi_set_options", 00:21:17.284 "params": { 00:21:17.284 "timeout_sec": 30 00:21:17.284 } 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "method": "bdev_nvme_set_options", 00:21:17.284 "params": { 00:21:17.284 "action_on_timeout": "none", 00:21:17.284 
"timeout_us": 0, 00:21:17.284 "timeout_admin_us": 0, 00:21:17.284 "keep_alive_timeout_ms": 10000, 00:21:17.284 "arbitration_burst": 0, 00:21:17.284 "low_priority_weight": 0, 00:21:17.284 "medium_priority_weight": 0, 00:21:17.284 "high_priority_weight": 0, 00:21:17.284 "nvme_adminq_poll_period_us": 10000, 00:21:17.284 "nvme_ioq_poll_period_us": 0, 00:21:17.284 "io_queue_requests": 0, 00:21:17.284 "delay_cmd_submit": true, 00:21:17.284 "transport_retry_count": 4, 00:21:17.284 "bdev_retry_count": 3, 00:21:17.284 "transport_ack_timeout": 0, 00:21:17.284 "ctrlr_loss_timeout_sec": 0, 00:21:17.284 "reconnect_delay_sec": 0, 00:21:17.284 "fast_io_fail_timeout_sec": 0, 00:21:17.284 "disable_auto_failback": false, 00:21:17.284 "generate_uuids": false, 00:21:17.284 "transport_tos": 0, 00:21:17.284 "nvme_error_stat": false, 00:21:17.284 "rdma_srq_size": 0, 00:21:17.284 "io_path_stat": false, 00:21:17.284 "allow_accel_sequence": false, 00:21:17.284 "rdma_max_cq_size": 0, 00:21:17.284 "rdma_cm_event_timeout_ms": 0, 00:21:17.284 "dhchap_digests": [ 00:21:17.284 "sha256", 00:21:17.284 "sha384", 00:21:17.284 "sha512" 00:21:17.284 ], 00:21:17.284 "dhchap_dhgroups": [ 00:21:17.284 "null", 00:21:17.284 "ffdhe2048", 00:21:17.284 "ffdhe3072", 00:21:17.284 "ffdhe4096", 00:21:17.284 "ffdhe6144", 00:21:17.284 "ffdhe8192" 00:21:17.284 ] 00:21:17.284 } 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "method": "bdev_nvme_set_hotplug", 00:21:17.284 "params": { 00:21:17.284 "period_us": 100000, 00:21:17.284 "enable": false 00:21:17.284 } 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "method": "bdev_malloc_create", 00:21:17.284 "params": { 00:21:17.284 "name": "malloc0", 00:21:17.284 "num_blocks": 8192, 00:21:17.284 "block_size": 4096, 00:21:17.284 "physical_block_size": 4096, 00:21:17.284 "uuid": "9f511863-c6e3-4813-bb89-42d39ddaae54", 00:21:17.284 "optimal_io_boundary": 0, 00:21:17.284 "md_size": 0, 00:21:17.284 "dif_type": 0, 00:21:17.284 "dif_is_head_of_md": false, 00:21:17.284 "dif_pi_format": 0 00:21:17.284 } 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "method": "bdev_wait_for_examine" 00:21:17.284 } 00:21:17.284 ] 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "subsystem": "nbd", 00:21:17.284 "config": [] 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "subsystem": "scheduler", 00:21:17.284 "config": [ 00:21:17.284 { 00:21:17.284 "method": "framework_set_scheduler", 00:21:17.284 "params": { 00:21:17.284 "name": "static" 00:21:17.284 } 00:21:17.284 } 00:21:17.284 ] 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "subsystem": "nvmf", 00:21:17.284 "config": [ 00:21:17.284 { 00:21:17.284 "method": "nvmf_set_config", 00:21:17.284 "params": { 00:21:17.284 "discovery_filter": "match_any", 00:21:17.284 "admin_cmd_passthru": { 00:21:17.284 "identify_ctrlr": false 00:21:17.284 }, 00:21:17.284 "dhchap_digests": [ 00:21:17.284 "sha256", 00:21:17.284 "sha384", 00:21:17.284 "sha512" 00:21:17.284 ], 00:21:17.284 "dhchap_dhgroups": [ 00:21:17.284 "null", 00:21:17.284 "ffdhe2048", 00:21:17.284 "ffdhe3072", 00:21:17.284 "ffdhe4096", 00:21:17.284 "ffdhe6144", 00:21:17.284 "ffdhe8192" 00:21:17.284 ] 00:21:17.284 } 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "method": "nvmf_set_max_subsystems", 00:21:17.284 "params": { 00:21:17.284 "max_subsystems": 1024 00:21:17.284 } 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "method": "nvmf_set_crdt", 00:21:17.284 "params": { 00:21:17.284 "crdt1": 0, 00:21:17.284 "crdt2": 0, 00:21:17.284 "crdt3": 0 00:21:17.284 } 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "method": "nvmf_create_transport", 00:21:17.284 "params": 
{ 00:21:17.284 "trtype": "TCP", 00:21:17.284 "max_queue_depth": 128, 00:21:17.284 "max_io_qpairs_per_ctrlr": 127, 00:21:17.284 "in_capsule_data_size": 4096, 00:21:17.284 "max_io_size": 131072, 00:21:17.284 "io_unit_size": 131072, 00:21:17.284 "max_aq_depth": 128, 00:21:17.284 "num_shared_buffers": 511, 00:21:17.284 "buf_cache_size": 4294967295, 00:21:17.284 "dif_insert_or_strip": false, 00:21:17.284 "zcopy": false, 00:21:17.284 "c2h_success": false, 00:21:17.284 "sock_priority": 0, 00:21:17.284 "abort_timeout_sec": 1, 00:21:17.284 "ack_timeout": 0, 00:21:17.284 "data_wr_pool_size": 0 00:21:17.284 } 00:21:17.284 }, 00:21:17.284 { 00:21:17.284 "method": "nvmf_create_subsystem", 00:21:17.284 "params": { 00:21:17.284 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.284 "allow_any_host": false, 00:21:17.284 "serial_number": "00000000000000000000", 00:21:17.284 "model_number": "SPDK bdev Controller", 00:21:17.284 "max_namespaces": 32, 00:21:17.284 "min_cntlid": 1, 00:21:17.284 "max_cntlid": 65519, 00:21:17.284 "ana_reporting": false 00:21:17.284 } 00:21:17.284 }, 00:21:17.284 { 00:21:17.285 "method": "nvmf_subsystem_add_host", 00:21:17.285 "params": { 00:21:17.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.285 "host": "nqn.2016-06.io.spdk:host1", 00:21:17.285 "psk": "key0" 00:21:17.285 } 00:21:17.285 }, 00:21:17.285 { 00:21:17.285 "method": "nvmf_subsystem_add_ns", 00:21:17.285 "params": { 00:21:17.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.285 "namespace": { 00:21:17.285 "nsid": 1, 00:21:17.285 "bdev_name": "malloc0", 00:21:17.285 "nguid": "9F511863C6E34813BB8942D39DDAAE54", 00:21:17.285 "uuid": "9f511863-c6e3-4813-bb89-42d39ddaae54", 00:21:17.285 "no_auto_visible": false 00:21:17.285 } 00:21:17.285 } 00:21:17.285 }, 00:21:17.285 { 00:21:17.285 "method": "nvmf_subsystem_add_listener", 00:21:17.285 "params": { 00:21:17.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.285 "listen_address": { 00:21:17.285 "trtype": "TCP", 00:21:17.285 "adrfam": "IPv4", 00:21:17.285 "traddr": "10.0.0.2", 00:21:17.285 "trsvcid": "4420" 00:21:17.285 }, 00:21:17.285 "secure_channel": false, 00:21:17.285 "sock_impl": "ssl" 00:21:17.285 } 00:21:17.285 } 00:21:17.285 ] 00:21:17.285 } 00:21:17.285 ] 00:21:17.285 }' 00:21:17.285 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:17.546 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:17.546 "subsystems": [ 00:21:17.546 { 00:21:17.546 "subsystem": "keyring", 00:21:17.546 "config": [ 00:21:17.546 { 00:21:17.546 "method": "keyring_file_add_key", 00:21:17.546 "params": { 00:21:17.546 "name": "key0", 00:21:17.546 "path": "/tmp/tmp.zCSsQ53vVc" 00:21:17.546 } 00:21:17.546 } 00:21:17.546 ] 00:21:17.546 }, 00:21:17.546 { 00:21:17.546 "subsystem": "iobuf", 00:21:17.546 "config": [ 00:21:17.546 { 00:21:17.546 "method": "iobuf_set_options", 00:21:17.546 "params": { 00:21:17.546 "small_pool_count": 8192, 00:21:17.546 "large_pool_count": 1024, 00:21:17.546 "small_bufsize": 8192, 00:21:17.546 "large_bufsize": 135168, 00:21:17.546 "enable_numa": false 00:21:17.546 } 00:21:17.546 } 00:21:17.546 ] 00:21:17.546 }, 00:21:17.546 { 00:21:17.546 "subsystem": "sock", 00:21:17.546 "config": [ 00:21:17.546 { 00:21:17.546 "method": "sock_set_default_impl", 00:21:17.546 "params": { 00:21:17.546 "impl_name": "posix" 00:21:17.546 } 00:21:17.546 }, 00:21:17.546 { 00:21:17.546 "method": "sock_impl_set_options", 00:21:17.546 
"params": { 00:21:17.546 "impl_name": "ssl", 00:21:17.546 "recv_buf_size": 4096, 00:21:17.546 "send_buf_size": 4096, 00:21:17.546 "enable_recv_pipe": true, 00:21:17.546 "enable_quickack": false, 00:21:17.546 "enable_placement_id": 0, 00:21:17.546 "enable_zerocopy_send_server": true, 00:21:17.546 "enable_zerocopy_send_client": false, 00:21:17.546 "zerocopy_threshold": 0, 00:21:17.546 "tls_version": 0, 00:21:17.546 "enable_ktls": false 00:21:17.546 } 00:21:17.546 }, 00:21:17.546 { 00:21:17.546 "method": "sock_impl_set_options", 00:21:17.546 "params": { 00:21:17.546 "impl_name": "posix", 00:21:17.546 "recv_buf_size": 2097152, 00:21:17.546 "send_buf_size": 2097152, 00:21:17.546 "enable_recv_pipe": true, 00:21:17.546 "enable_quickack": false, 00:21:17.546 "enable_placement_id": 0, 00:21:17.546 "enable_zerocopy_send_server": true, 00:21:17.546 "enable_zerocopy_send_client": false, 00:21:17.546 "zerocopy_threshold": 0, 00:21:17.546 "tls_version": 0, 00:21:17.546 "enable_ktls": false 00:21:17.546 } 00:21:17.546 } 00:21:17.546 ] 00:21:17.546 }, 00:21:17.546 { 00:21:17.546 "subsystem": "vmd", 00:21:17.546 "config": [] 00:21:17.546 }, 00:21:17.546 { 00:21:17.546 "subsystem": "accel", 00:21:17.546 "config": [ 00:21:17.546 { 00:21:17.546 "method": "accel_set_options", 00:21:17.546 "params": { 00:21:17.546 "small_cache_size": 128, 00:21:17.546 "large_cache_size": 16, 00:21:17.546 "task_count": 2048, 00:21:17.546 "sequence_count": 2048, 00:21:17.546 "buf_count": 2048 00:21:17.546 } 00:21:17.546 } 00:21:17.546 ] 00:21:17.546 }, 00:21:17.546 { 00:21:17.546 "subsystem": "bdev", 00:21:17.546 "config": [ 00:21:17.546 { 00:21:17.546 "method": "bdev_set_options", 00:21:17.546 "params": { 00:21:17.546 "bdev_io_pool_size": 65535, 00:21:17.546 "bdev_io_cache_size": 256, 00:21:17.546 "bdev_auto_examine": true, 00:21:17.546 "iobuf_small_cache_size": 128, 00:21:17.546 "iobuf_large_cache_size": 16 00:21:17.546 } 00:21:17.546 }, 00:21:17.546 { 00:21:17.546 "method": "bdev_raid_set_options", 00:21:17.546 "params": { 00:21:17.546 "process_window_size_kb": 1024, 00:21:17.546 "process_max_bandwidth_mb_sec": 0 00:21:17.546 } 00:21:17.546 }, 00:21:17.546 { 00:21:17.546 "method": "bdev_iscsi_set_options", 00:21:17.546 "params": { 00:21:17.546 "timeout_sec": 30 00:21:17.546 } 00:21:17.546 }, 00:21:17.546 { 00:21:17.546 "method": "bdev_nvme_set_options", 00:21:17.546 "params": { 00:21:17.546 "action_on_timeout": "none", 00:21:17.546 "timeout_us": 0, 00:21:17.546 "timeout_admin_us": 0, 00:21:17.546 "keep_alive_timeout_ms": 10000, 00:21:17.546 "arbitration_burst": 0, 00:21:17.546 "low_priority_weight": 0, 00:21:17.546 "medium_priority_weight": 0, 00:21:17.546 "high_priority_weight": 0, 00:21:17.546 "nvme_adminq_poll_period_us": 10000, 00:21:17.546 "nvme_ioq_poll_period_us": 0, 00:21:17.546 "io_queue_requests": 512, 00:21:17.546 "delay_cmd_submit": true, 00:21:17.546 "transport_retry_count": 4, 00:21:17.546 "bdev_retry_count": 3, 00:21:17.546 "transport_ack_timeout": 0, 00:21:17.546 "ctrlr_loss_timeout_sec": 0, 00:21:17.546 "reconnect_delay_sec": 0, 00:21:17.546 "fast_io_fail_timeout_sec": 0, 00:21:17.546 "disable_auto_failback": false, 00:21:17.546 "generate_uuids": false, 00:21:17.546 "transport_tos": 0, 00:21:17.546 "nvme_error_stat": false, 00:21:17.546 "rdma_srq_size": 0, 00:21:17.546 "io_path_stat": false, 00:21:17.546 "allow_accel_sequence": false, 00:21:17.546 "rdma_max_cq_size": 0, 00:21:17.546 "rdma_cm_event_timeout_ms": 0, 00:21:17.546 "dhchap_digests": [ 00:21:17.546 "sha256", 00:21:17.546 "sha384", 00:21:17.546 
"sha512" 00:21:17.546 ], 00:21:17.546 "dhchap_dhgroups": [ 00:21:17.546 "null", 00:21:17.546 "ffdhe2048", 00:21:17.546 "ffdhe3072", 00:21:17.546 "ffdhe4096", 00:21:17.546 "ffdhe6144", 00:21:17.546 "ffdhe8192" 00:21:17.546 ] 00:21:17.546 } 00:21:17.546 }, 00:21:17.546 { 00:21:17.546 "method": "bdev_nvme_attach_controller", 00:21:17.546 "params": { 00:21:17.546 "name": "nvme0", 00:21:17.546 "trtype": "TCP", 00:21:17.546 "adrfam": "IPv4", 00:21:17.546 "traddr": "10.0.0.2", 00:21:17.546 "trsvcid": "4420", 00:21:17.546 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.546 "prchk_reftag": false, 00:21:17.546 "prchk_guard": false, 00:21:17.546 "ctrlr_loss_timeout_sec": 0, 00:21:17.546 "reconnect_delay_sec": 0, 00:21:17.546 "fast_io_fail_timeout_sec": 0, 00:21:17.546 "psk": "key0", 00:21:17.546 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.546 "hdgst": false, 00:21:17.546 "ddgst": false, 00:21:17.546 "multipath": "multipath" 00:21:17.546 } 00:21:17.546 }, 00:21:17.546 { 00:21:17.546 "method": "bdev_nvme_set_hotplug", 00:21:17.547 "params": { 00:21:17.547 "period_us": 100000, 00:21:17.547 "enable": false 00:21:17.547 } 00:21:17.547 }, 00:21:17.547 { 00:21:17.547 "method": "bdev_enable_histogram", 00:21:17.547 "params": { 00:21:17.547 "name": "nvme0n1", 00:21:17.547 "enable": true 00:21:17.547 } 00:21:17.547 }, 00:21:17.547 { 00:21:17.547 "method": "bdev_wait_for_examine" 00:21:17.547 } 00:21:17.547 ] 00:21:17.547 }, 00:21:17.547 { 00:21:17.547 "subsystem": "nbd", 00:21:17.547 "config": [] 00:21:17.547 } 00:21:17.547 ] 00:21:17.547 }' 00:21:17.547 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 956857 00:21:17.547 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 956857 ']' 00:21:17.547 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 956857 00:21:17.547 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:17.547 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.547 13:25:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 956857 00:21:17.547 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:17.547 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:17.547 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 956857' 00:21:17.547 killing process with pid 956857 00:21:17.547 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 956857 00:21:17.547 Received shutdown signal, test time was about 1.000000 seconds 00:21:17.547 00:21:17.547 Latency(us) 00:21:17.547 [2024-12-05T12:25:40.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.547 [2024-12-05T12:25:40.115Z] =================================================================================================================== 00:21:17.547 [2024-12-05T12:25:40.115Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:17.547 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 956857 00:21:17.808 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 956829 00:21:17.808 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 956829 ']' 
00:21:17.808 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 956829 00:21:17.808 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:17.808 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.808 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 956829 00:21:17.808 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:17.808 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:17.808 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 956829' 00:21:17.808 killing process with pid 956829 00:21:17.808 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 956829 00:21:17.808 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 956829 00:21:17.808 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:17.808 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:17.808 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.808 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:17.808 "subsystems": [ 00:21:17.808 { 00:21:17.808 "subsystem": "keyring", 00:21:17.808 "config": [ 00:21:17.808 { 00:21:17.808 "method": "keyring_file_add_key", 00:21:17.808 "params": { 00:21:17.808 "name": "key0", 00:21:17.808 "path": "/tmp/tmp.zCSsQ53vVc" 00:21:17.808 } 00:21:17.808 } 00:21:17.808 ] 00:21:17.808 }, 00:21:17.808 { 00:21:17.808 "subsystem": "iobuf", 00:21:17.808 "config": [ 00:21:17.808 { 00:21:17.808 "method": "iobuf_set_options", 00:21:17.808 "params": { 00:21:17.808 "small_pool_count": 8192, 00:21:17.808 "large_pool_count": 1024, 00:21:17.808 "small_bufsize": 8192, 00:21:17.808 "large_bufsize": 135168, 00:21:17.808 "enable_numa": false 00:21:17.808 } 00:21:17.808 } 00:21:17.808 ] 00:21:17.808 }, 00:21:17.808 { 00:21:17.808 "subsystem": "sock", 00:21:17.808 "config": [ 00:21:17.808 { 00:21:17.808 "method": "sock_set_default_impl", 00:21:17.808 "params": { 00:21:17.808 "impl_name": "posix" 00:21:17.808 } 00:21:17.808 }, 00:21:17.808 { 00:21:17.808 "method": "sock_impl_set_options", 00:21:17.808 "params": { 00:21:17.808 "impl_name": "ssl", 00:21:17.808 "recv_buf_size": 4096, 00:21:17.808 "send_buf_size": 4096, 00:21:17.808 "enable_recv_pipe": true, 00:21:17.808 "enable_quickack": false, 00:21:17.808 "enable_placement_id": 0, 00:21:17.808 "enable_zerocopy_send_server": true, 00:21:17.808 "enable_zerocopy_send_client": false, 00:21:17.808 "zerocopy_threshold": 0, 00:21:17.808 "tls_version": 0, 00:21:17.808 "enable_ktls": false 00:21:17.808 } 00:21:17.808 }, 00:21:17.808 { 00:21:17.808 "method": "sock_impl_set_options", 00:21:17.808 "params": { 00:21:17.808 "impl_name": "posix", 00:21:17.808 "recv_buf_size": 2097152, 00:21:17.808 "send_buf_size": 2097152, 00:21:17.808 "enable_recv_pipe": true, 00:21:17.808 "enable_quickack": false, 00:21:17.808 "enable_placement_id": 0, 00:21:17.808 "enable_zerocopy_send_server": true, 00:21:17.808 "enable_zerocopy_send_client": false, 00:21:17.808 "zerocopy_threshold": 0, 00:21:17.808 "tls_version": 0, 00:21:17.808 "enable_ktls": false 
00:21:17.808 } 00:21:17.808 } 00:21:17.808 ] 00:21:17.808 }, 00:21:17.808 { 00:21:17.808 "subsystem": "vmd", 00:21:17.808 "config": [] 00:21:17.808 }, 00:21:17.808 { 00:21:17.808 "subsystem": "accel", 00:21:17.808 "config": [ 00:21:17.808 { 00:21:17.808 "method": "accel_set_options", 00:21:17.808 "params": { 00:21:17.808 "small_cache_size": 128, 00:21:17.808 "large_cache_size": 16, 00:21:17.808 "task_count": 2048, 00:21:17.808 "sequence_count": 2048, 00:21:17.808 "buf_count": 2048 00:21:17.808 } 00:21:17.808 } 00:21:17.808 ] 00:21:17.808 }, 00:21:17.808 { 00:21:17.808 "subsystem": "bdev", 00:21:17.808 "config": [ 00:21:17.808 { 00:21:17.808 "method": "bdev_set_options", 00:21:17.808 "params": { 00:21:17.808 "bdev_io_pool_size": 65535, 00:21:17.808 "bdev_io_cache_size": 256, 00:21:17.808 "bdev_auto_examine": true, 00:21:17.808 "iobuf_small_cache_size": 128, 00:21:17.808 "iobuf_large_cache_size": 16 00:21:17.808 } 00:21:17.808 }, 00:21:17.808 { 00:21:17.808 "method": "bdev_raid_set_options", 00:21:17.808 "params": { 00:21:17.808 "process_window_size_kb": 1024, 00:21:17.808 "process_max_bandwidth_mb_sec": 0 00:21:17.808 } 00:21:17.808 }, 00:21:17.808 { 00:21:17.808 "method": "bdev_iscsi_set_options", 00:21:17.808 "params": { 00:21:17.808 "timeout_sec": 30 00:21:17.808 } 00:21:17.808 }, 00:21:17.808 { 00:21:17.808 "method": "bdev_nvme_set_options", 00:21:17.808 "params": { 00:21:17.808 "action_on_timeout": "none", 00:21:17.808 "timeout_us": 0, 00:21:17.808 "timeout_admin_us": 0, 00:21:17.808 "keep_alive_timeout_ms": 10000, 00:21:17.808 "arbitration_burst": 0, 00:21:17.808 "low_priority_weight": 0, 00:21:17.808 "medium_priority_weight": 0, 00:21:17.808 "high_priority_weight": 0, 00:21:17.808 "nvme_adminq_poll_period_us": 10000, 00:21:17.808 "nvme_ioq_poll_period_us": 0, 00:21:17.808 "io_queue_requests": 0, 00:21:17.808 "delay_cmd_submit": true, 00:21:17.808 "transport_retry_count": 4, 00:21:17.808 "bdev_retry_count": 3, 00:21:17.808 "transport_ack_timeout": 0, 00:21:17.808 "ctrlr_loss_timeout_sec": 0, 00:21:17.808 "reconnect_delay_sec": 0, 00:21:17.808 "fast_io_fail_timeout_sec": 0, 00:21:17.808 "disable_auto_failback": false, 00:21:17.808 "generate_uuids": false, 00:21:17.808 "transport_tos": 0, 00:21:17.808 "nvme_error_stat": false, 00:21:17.808 "rdma_srq_size": 0, 00:21:17.808 "io_path_stat": false, 00:21:17.808 "allow_accel_sequence": false, 00:21:17.808 "rdma_max_cq_size": 0, 00:21:17.808 "rdma_cm_event_timeout_ms": 0, 00:21:17.808 "dhchap_digests": [ 00:21:17.808 "sha256", 00:21:17.808 "sha384", 00:21:17.808 "sha512" 00:21:17.808 ], 00:21:17.808 "dhchap_dhgroups": [ 00:21:17.808 "null", 00:21:17.808 "ffdhe2048", 00:21:17.808 "ffdhe3072", 00:21:17.808 "ffdhe4096", 00:21:17.808 "ffdhe6144", 00:21:17.808 "ffdhe8192" 00:21:17.808 ] 00:21:17.808 } 00:21:17.808 }, 00:21:17.808 { 00:21:17.808 "method": "bdev_nvme_set_hotplug", 00:21:17.808 "params": { 00:21:17.808 "period_us": 100000, 00:21:17.808 "enable": false 00:21:17.808 } 00:21:17.808 }, 00:21:17.808 { 00:21:17.808 "method": "bdev_malloc_create", 00:21:17.808 "params": { 00:21:17.808 "name": "malloc0", 00:21:17.808 "num_blocks": 8192, 00:21:17.808 "block_size": 4096, 00:21:17.808 "physical_block_size": 4096, 00:21:17.808 "uuid": "9f511863-c6e3-4813-bb89-42d39ddaae54", 00:21:17.808 "optimal_io_boundary": 0, 00:21:17.808 "md_size": 0, 00:21:17.808 "dif_type": 0, 00:21:17.808 "dif_is_head_of_md": false, 00:21:17.808 "dif_pi_format": 0 00:21:17.808 } 00:21:17.808 }, 00:21:17.808 { 00:21:17.808 "method": "bdev_wait_for_examine" 00:21:17.808 } 
00:21:17.808 ] 00:21:17.808 }, 00:21:17.808 { 00:21:17.808 "subsystem": "nbd", 00:21:17.808 "config": [] 00:21:17.808 }, 00:21:17.809 { 00:21:17.809 "subsystem": "scheduler", 00:21:17.809 "config": [ 00:21:17.809 { 00:21:17.809 "method": "framework_set_scheduler", 00:21:17.809 "params": { 00:21:17.809 "name": "static" 00:21:17.809 } 00:21:17.809 } 00:21:17.809 ] 00:21:17.809 }, 00:21:17.809 { 00:21:17.809 "subsystem": "nvmf", 00:21:17.809 "config": [ 00:21:17.809 { 00:21:17.809 "method": "nvmf_set_config", 00:21:17.809 "params": { 00:21:17.809 "discovery_filter": "match_any", 00:21:17.809 "admin_cmd_passthru": { 00:21:17.809 "identify_ctrlr": false 00:21:17.809 }, 00:21:17.809 "dhchap_digests": [ 00:21:17.809 "sha256", 00:21:17.809 "sha384", 00:21:17.809 "sha512" 00:21:17.809 ], 00:21:17.809 "dhchap_dhgroups": [ 00:21:17.809 "null", 00:21:17.809 "ffdhe2048", 00:21:17.809 "ffdhe3072", 00:21:17.809 "ffdhe4096", 00:21:17.809 "ffdhe6144", 00:21:17.809 "ffdhe8192" 00:21:17.809 ] 00:21:17.809 } 00:21:17.809 }, 00:21:17.809 { 00:21:17.809 "method": "nvmf_set_max_subsystems", 00:21:17.809 "params": { 00:21:17.809 "max_subsystems": 1024 00:21:17.809 } 00:21:17.809 }, 00:21:17.809 { 00:21:17.809 "method": "nvmf_set_crdt", 00:21:17.809 "params": { 00:21:17.809 "crdt1": 0, 00:21:17.809 "crdt2": 0, 00:21:17.809 "crdt3": 0 00:21:17.809 } 00:21:17.809 }, 00:21:17.809 { 00:21:17.809 "method": "nvmf_create_transport", 00:21:17.809 "params": { 00:21:17.809 "trtype": "TCP", 00:21:17.809 "max_queue_depth": 128, 00:21:17.809 "max_io_qpairs_per_ctrlr": 127, 00:21:17.809 "in_capsule_data_size": 4096, 00:21:17.809 "max_io_size": 131072, 00:21:17.809 "io_unit_size": 131072, 00:21:17.809 "max_aq_depth": 128, 00:21:17.809 "num_shared_buffers": 511, 00:21:17.809 "buf_cache_size": 4294967295, 00:21:17.809 "dif_insert_or_strip": false, 00:21:17.809 "zcopy": false, 00:21:17.809 "c2h_success": false, 00:21:17.809 "sock_priority": 0, 00:21:17.809 "abort_timeout_sec": 1, 00:21:17.809 "ack_timeout": 0, 00:21:17.809 "data_wr_pool_size": 0 00:21:17.809 } 00:21:17.809 }, 00:21:17.809 { 00:21:17.809 "method": "nvmf_create_subsystem", 00:21:17.809 "params": { 00:21:17.809 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.809 "allow_any_host": false, 00:21:17.809 "serial_number": "00000000000000000000", 00:21:17.809 "model_number": "SPDK bdev Controller", 00:21:17.809 "max_namespaces": 32, 00:21:17.809 "min_cntlid": 1, 00:21:17.809 "max_cntlid": 65519, 00:21:17.809 "ana_reporting": false 00:21:17.809 } 00:21:17.809 }, 00:21:17.809 { 00:21:17.809 "method": "nvmf_subsystem_add_host", 00:21:17.809 "params": { 00:21:17.809 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.809 "host": "nqn.2016-06.io.spdk:host1", 00:21:17.809 "psk": "key0" 00:21:17.809 } 00:21:17.809 }, 00:21:17.809 { 00:21:17.809 "method": "nvmf_subsystem_add_ns", 00:21:17.809 "params": { 00:21:17.809 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.809 "namespace": { 00:21:17.809 "nsid": 1, 00:21:17.809 "bdev_name": "malloc0", 00:21:17.809 "nguid": "9F511863C6E34813BB8942D39DDAAE54", 00:21:17.809 "uuid": "9f511863-c6e3-4813-bb89-42d39ddaae54", 00:21:17.809 "no_auto_visible": false 00:21:17.809 } 00:21:17.809 } 00:21:17.809 }, 00:21:17.809 { 00:21:17.809 "method": "nvmf_subsystem_add_listener", 00:21:17.809 "params": { 00:21:17.809 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.809 "listen_address": { 00:21:17.809 "trtype": "TCP", 00:21:17.809 "adrfam": "IPv4", 00:21:17.809 "traddr": "10.0.0.2", 00:21:17.809 "trsvcid": "4420" 00:21:17.809 }, 00:21:17.809 "secure_channel": false, 
00:21:17.809 "sock_impl": "ssl" 00:21:17.809 } 00:21:17.809 } 00:21:17.809 ] 00:21:17.809 } 00:21:17.809 ] 00:21:17.809 }' 00:21:17.809 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.809 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:17.809 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=957544 00:21:17.809 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 957544 00:21:17.809 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 957544 ']' 00:21:17.809 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.809 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.809 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.809 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.809 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.809 [2024-12-05 13:25:40.328648] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:21:17.809 [2024-12-05 13:25:40.328703] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.069 [2024-12-05 13:25:40.410520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.069 [2024-12-05 13:25:40.444622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.069 [2024-12-05 13:25:40.444656] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.069 [2024-12-05 13:25:40.444664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.069 [2024-12-05 13:25:40.444671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.069 [2024-12-05 13:25:40.444676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:18.069 [2024-12-05 13:25:40.445266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.329 [2024-12-05 13:25:40.645326] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.329 [2024-12-05 13:25:40.677352] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.329 [2024-12-05 13:25:40.677589] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:18.590 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.590 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:18.590 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:18.590 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:18.590 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.590 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.590 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=957574 00:21:18.590 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 957574 /var/tmp/bdevperf.sock 00:21:18.590 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 957574 ']' 00:21:18.590 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.590 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.590 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:18.590 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:18.590 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.590 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.590 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:18.590 "subsystems": [ 00:21:18.590 { 00:21:18.590 "subsystem": "keyring", 00:21:18.590 "config": [ 00:21:18.590 { 00:21:18.590 "method": "keyring_file_add_key", 00:21:18.590 "params": { 00:21:18.590 "name": "key0", 00:21:18.590 "path": "/tmp/tmp.zCSsQ53vVc" 00:21:18.590 } 00:21:18.590 } 00:21:18.590 ] 00:21:18.590 }, 00:21:18.590 { 00:21:18.590 "subsystem": "iobuf", 00:21:18.590 "config": [ 00:21:18.590 { 00:21:18.590 "method": "iobuf_set_options", 00:21:18.590 "params": { 00:21:18.590 "small_pool_count": 8192, 00:21:18.590 "large_pool_count": 1024, 00:21:18.590 "small_bufsize": 8192, 00:21:18.590 "large_bufsize": 135168, 00:21:18.590 "enable_numa": false 00:21:18.590 } 00:21:18.590 } 00:21:18.590 ] 00:21:18.590 }, 00:21:18.590 { 00:21:18.590 "subsystem": "sock", 00:21:18.590 "config": [ 00:21:18.590 { 00:21:18.590 "method": "sock_set_default_impl", 00:21:18.590 "params": { 00:21:18.590 "impl_name": "posix" 00:21:18.590 } 00:21:18.590 }, 00:21:18.590 { 00:21:18.590 "method": "sock_impl_set_options", 00:21:18.590 "params": { 00:21:18.590 "impl_name": "ssl", 00:21:18.590 "recv_buf_size": 4096, 00:21:18.590 "send_buf_size": 4096, 00:21:18.590 "enable_recv_pipe": true, 00:21:18.590 "enable_quickack": false, 00:21:18.590 "enable_placement_id": 0, 00:21:18.590 "enable_zerocopy_send_server": true, 00:21:18.590 "enable_zerocopy_send_client": false, 00:21:18.590 "zerocopy_threshold": 0, 00:21:18.590 "tls_version": 0, 00:21:18.590 "enable_ktls": false 00:21:18.590 } 00:21:18.590 }, 00:21:18.590 { 00:21:18.590 "method": "sock_impl_set_options", 00:21:18.590 "params": { 00:21:18.590 "impl_name": "posix", 00:21:18.590 "recv_buf_size": 2097152, 00:21:18.590 "send_buf_size": 2097152, 00:21:18.590 "enable_recv_pipe": true, 00:21:18.590 "enable_quickack": false, 00:21:18.590 "enable_placement_id": 0, 00:21:18.590 "enable_zerocopy_send_server": true, 00:21:18.590 "enable_zerocopy_send_client": false, 00:21:18.590 "zerocopy_threshold": 0, 00:21:18.591 "tls_version": 0, 00:21:18.591 "enable_ktls": false 00:21:18.591 } 00:21:18.591 } 00:21:18.591 ] 00:21:18.591 }, 00:21:18.591 { 00:21:18.591 "subsystem": "vmd", 00:21:18.591 "config": [] 00:21:18.591 }, 00:21:18.591 { 00:21:18.591 "subsystem": "accel", 00:21:18.591 "config": [ 00:21:18.591 { 00:21:18.591 "method": "accel_set_options", 00:21:18.591 "params": { 00:21:18.591 "small_cache_size": 128, 00:21:18.591 "large_cache_size": 16, 00:21:18.591 "task_count": 2048, 00:21:18.591 "sequence_count": 2048, 00:21:18.591 "buf_count": 2048 00:21:18.591 } 00:21:18.591 } 00:21:18.591 ] 00:21:18.591 }, 00:21:18.591 { 00:21:18.591 "subsystem": "bdev", 00:21:18.591 "config": [ 00:21:18.591 { 00:21:18.591 "method": "bdev_set_options", 00:21:18.591 "params": { 00:21:18.591 "bdev_io_pool_size": 65535, 00:21:18.591 "bdev_io_cache_size": 256, 00:21:18.591 "bdev_auto_examine": true, 00:21:18.591 "iobuf_small_cache_size": 128, 00:21:18.591 "iobuf_large_cache_size": 16 00:21:18.591 } 00:21:18.591 }, 00:21:18.591 { 00:21:18.591 "method": 
"bdev_raid_set_options", 00:21:18.591 "params": { 00:21:18.591 "process_window_size_kb": 1024, 00:21:18.591 "process_max_bandwidth_mb_sec": 0 00:21:18.591 } 00:21:18.591 }, 00:21:18.591 { 00:21:18.591 "method": "bdev_iscsi_set_options", 00:21:18.591 "params": { 00:21:18.591 "timeout_sec": 30 00:21:18.591 } 00:21:18.591 }, 00:21:18.591 { 00:21:18.591 "method": "bdev_nvme_set_options", 00:21:18.591 "params": { 00:21:18.591 "action_on_timeout": "none", 00:21:18.591 "timeout_us": 0, 00:21:18.591 "timeout_admin_us": 0, 00:21:18.591 "keep_alive_timeout_ms": 10000, 00:21:18.591 "arbitration_burst": 0, 00:21:18.591 "low_priority_weight": 0, 00:21:18.591 "medium_priority_weight": 0, 00:21:18.591 "high_priority_weight": 0, 00:21:18.591 "nvme_adminq_poll_period_us": 10000, 00:21:18.591 "nvme_ioq_poll_period_us": 0, 00:21:18.591 "io_queue_requests": 512, 00:21:18.591 "delay_cmd_submit": true, 00:21:18.591 "transport_retry_count": 4, 00:21:18.591 "bdev_retry_count": 3, 00:21:18.591 "transport_ack_timeout": 0, 00:21:18.591 "ctrlr_loss_timeout_sec": 0, 00:21:18.591 "reconnect_delay_sec": 0, 00:21:18.591 "fast_io_fail_timeout_sec": 0, 00:21:18.591 "disable_auto_failback": false, 00:21:18.591 "generate_uuids": false, 00:21:18.591 "transport_tos": 0, 00:21:18.591 "nvme_error_stat": false, 00:21:18.591 "rdma_srq_size": 0, 00:21:18.591 "io_path_stat": false, 00:21:18.591 "allow_accel_sequence": false, 00:21:18.591 "rdma_max_cq_size": 0, 00:21:18.591 "rdma_cm_event_timeout_ms": 0, 00:21:18.591 "dhchap_digests": [ 00:21:18.591 "sha256", 00:21:18.591 "sha384", 00:21:18.591 "sha512" 00:21:18.591 ], 00:21:18.591 "dhchap_dhgroups": [ 00:21:18.591 "null", 00:21:18.591 "ffdhe2048", 00:21:18.591 "ffdhe3072", 00:21:18.591 "ffdhe4096", 00:21:18.591 "ffdhe6144", 00:21:18.591 "ffdhe8192" 00:21:18.591 ] 00:21:18.591 } 00:21:18.591 }, 00:21:18.591 { 00:21:18.591 "method": "bdev_nvme_attach_controller", 00:21:18.591 "params": { 00:21:18.591 "name": "nvme0", 00:21:18.591 "trtype": "TCP", 00:21:18.591 "adrfam": "IPv4", 00:21:18.591 "traddr": "10.0.0.2", 00:21:18.591 "trsvcid": "4420", 00:21:18.591 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.591 "prchk_reftag": false, 00:21:18.591 "prchk_guard": false, 00:21:18.591 "ctrlr_loss_timeout_sec": 0, 00:21:18.591 "reconnect_delay_sec": 0, 00:21:18.591 "fast_io_fail_timeout_sec": 0, 00:21:18.591 "psk": "key0", 00:21:18.591 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.591 "hdgst": false, 00:21:18.591 "ddgst": false, 00:21:18.591 "multipath": "multipath" 00:21:18.591 } 00:21:18.591 }, 00:21:18.591 { 00:21:18.591 "method": "bdev_nvme_set_hotplug", 00:21:18.591 "params": { 00:21:18.591 "period_us": 100000, 00:21:18.591 "enable": false 00:21:18.591 } 00:21:18.591 }, 00:21:18.591 { 00:21:18.591 "method": "bdev_enable_histogram", 00:21:18.591 "params": { 00:21:18.591 "name": "nvme0n1", 00:21:18.591 "enable": true 00:21:18.591 } 00:21:18.591 }, 00:21:18.591 { 00:21:18.591 "method": "bdev_wait_for_examine" 00:21:18.591 } 00:21:18.591 ] 00:21:18.591 }, 00:21:18.591 { 00:21:18.591 "subsystem": "nbd", 00:21:18.591 "config": [] 00:21:18.591 } 00:21:18.591 ] 00:21:18.591 }' 00:21:18.852 [2024-12-05 13:25:41.211114] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:21:18.852 [2024-12-05 13:25:41.211167] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid957574 ] 00:21:18.852 [2024-12-05 13:25:41.301676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.852 [2024-12-05 13:25:41.331715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.113 [2024-12-05 13:25:41.467899] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.685 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.685 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:19.685 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:19.685 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:19.685 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.685 13:25:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:19.685 Running I/O for 1 seconds... 00:21:21.070 4485.00 IOPS, 17.52 MiB/s 00:21:21.070 Latency(us) 00:21:21.070 [2024-12-05T12:25:43.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.070 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:21.070 Verification LBA range: start 0x0 length 0x2000 00:21:21.070 nvme0n1 : 1.02 4516.89 17.64 0.00 0.00 28080.89 4560.21 62914.56 00:21:21.070 [2024-12-05T12:25:43.638Z] =================================================================================================================== 00:21:21.070 [2024-12-05T12:25:43.638Z] Total : 4516.89 17.64 0.00 0.00 28080.89 4560.21 62914.56 00:21:21.070 { 00:21:21.070 "results": [ 00:21:21.070 { 00:21:21.070 "job": "nvme0n1", 00:21:21.070 "core_mask": "0x2", 00:21:21.070 "workload": "verify", 00:21:21.070 "status": "finished", 00:21:21.070 "verify_range": { 00:21:21.070 "start": 0, 00:21:21.070 "length": 8192 00:21:21.070 }, 00:21:21.070 "queue_depth": 128, 00:21:21.070 "io_size": 4096, 00:21:21.070 "runtime": 1.021279, 00:21:21.070 "iops": 4516.885199832759, 00:21:21.070 "mibps": 17.644082811846715, 00:21:21.070 "io_failed": 0, 00:21:21.070 "io_timeout": 0, 00:21:21.070 "avg_latency_us": 28080.887324228628, 00:21:21.070 "min_latency_us": 4560.213333333333, 00:21:21.070 "max_latency_us": 62914.56 00:21:21.070 } 00:21:21.070 ], 00:21:21.070 "core_count": 1 00:21:21.070 } 00:21:21.070 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:21.070 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:21.070 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:21.070 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:21.070 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:21.070 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid 
']' 00:21:21.070 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:21.070 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:21.070 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:21.070 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:21.070 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:21.070 nvmf_trace.0 00:21:21.070 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 957574 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 957574 ']' 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 957574 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 957574 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 957574' 00:21:21.071 killing process with pid 957574 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 957574 00:21:21.071 Received shutdown signal, test time was about 1.000000 seconds 00:21:21.071 00:21:21.071 Latency(us) 00:21:21.071 [2024-12-05T12:25:43.639Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.071 [2024-12-05T12:25:43.639Z] =================================================================================================================== 00:21:21.071 [2024-12-05T12:25:43.639Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 957574 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:21.071 rmmod nvme_tcp 00:21:21.071 rmmod nvme_fabrics 00:21:21.071 rmmod nvme_keyring 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:21.071 13:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 957544 ']' 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 957544 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 957544 ']' 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 957544 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.071 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 957544 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 957544' 00:21:21.332 killing process with pid 957544 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 957544 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 957544 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.332 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.878 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:23.878 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ZF5td0tAl1 /tmp/tmp.4Xq2t1xmGo /tmp/tmp.zCSsQ53vVc 00:21:23.878 00:21:23.878 real 1m24.019s 00:21:23.878 user 2m9.349s 00:21:23.878 sys 0m27.257s 00:21:23.878 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:23.878 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.878 ************************************ 00:21:23.878 END TEST nvmf_tls 00:21:23.878 
************************************ 00:21:23.878 13:25:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:23.878 13:25:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:23.878 13:25:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:23.878 13:25:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:23.878 ************************************ 00:21:23.878 START TEST nvmf_fips 00:21:23.878 ************************************ 00:21:23.878 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:23.878 * Looking for test storage... 00:21:23.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:23.878 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:23.878 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:21:23.878 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:23.878 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:23.878 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:23.878 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:23.878 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:23.878 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:23.878 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:23.878 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:23.878 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:23.878 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:23.878 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:23.878 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:23.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.879 --rc genhtml_branch_coverage=1 00:21:23.879 --rc genhtml_function_coverage=1 00:21:23.879 --rc genhtml_legend=1 00:21:23.879 --rc geninfo_all_blocks=1 00:21:23.879 --rc geninfo_unexecuted_blocks=1 00:21:23.879 00:21:23.879 ' 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:23.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.879 --rc genhtml_branch_coverage=1 00:21:23.879 --rc genhtml_function_coverage=1 00:21:23.879 --rc genhtml_legend=1 00:21:23.879 --rc geninfo_all_blocks=1 00:21:23.879 --rc geninfo_unexecuted_blocks=1 00:21:23.879 00:21:23.879 ' 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:23.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.879 --rc genhtml_branch_coverage=1 00:21:23.879 --rc genhtml_function_coverage=1 00:21:23.879 --rc genhtml_legend=1 00:21:23.879 --rc geninfo_all_blocks=1 00:21:23.879 --rc geninfo_unexecuted_blocks=1 00:21:23.879 00:21:23.879 ' 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:23.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:23.879 --rc genhtml_branch_coverage=1 00:21:23.879 --rc genhtml_function_coverage=1 00:21:23.879 --rc genhtml_legend=1 00:21:23.879 --rc geninfo_all_blocks=1 00:21:23.879 --rc geninfo_unexecuted_blocks=1 00:21:23.879 00:21:23.879 ' 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:23.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:23.879 13:25:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:23.879 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:21:23.880 Error setting digest 00:21:23.880 40C225481C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:23.880 40C225481C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:23.880 
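The "Error setting digest" above is the point of the check, not a failure of it: fips.sh first confirms via openssl list -providers that both a base and a fips provider are loaded, then runs MD5 (a non-approved digest) and requires it to fail. A condensed standalone version of the same probe (provider name strings differ per distro; this run matched the Red Hat Enterprise Linux build):

# Both a "base" and a "fips" provider must be present.
openssl list -providers | grep name
# MD5 is not FIPS-approved, so it must be rejected; a zero exit
# status here would mean FIPS mode is not actually enforced.
if echo -n test | openssl md5; then
    echo "MD5 unexpectedly succeeded: FIPS mode not enforced" >&2
    exit 1
fi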
13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:23.880 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:32.035 13:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:32.035 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:32.035 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:32.035 13:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:32.035 Found net devices under 0000:31:00.0: cvl_0_0 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:32.035 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:32.035 Found net devices under 0000:31:00.1: cvl_0_1 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:32.036 13:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:32.036 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:32.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:32.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:21:32.297 00:21:32.297 --- 10.0.0.2 ping statistics --- 00:21:32.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.297 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:32.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:32.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:21:32.297 00:21:32.297 --- 10.0.0.1 ping statistics --- 00:21:32.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:32.297 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:32.297 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:32.559 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:32.559 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:32.559 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:32.559 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:32.559 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=962949 00:21:32.559 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 962949 00:21:32.559 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:32.559 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 962949 ']' 00:21:32.559 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.559 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.559 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:32.559 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.559 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:32.559 [2024-12-05 13:25:54.952416] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
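Both pings succeeding closes out the network bring-up: nvmf_tcp_init moved one of the two e810 ports into a private namespace so that target (10.0.0.2) and initiator (10.0.0.1) traffic crosses a real link. Condensed to the commands visible in this run, using the interface names and addresses this rig happens to assign:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                               # root ns -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 # and back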
00:21:32.559 [2024-12-05 13:25:54.952490] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:32.559 [2024-12-05 13:25:55.059201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.559 [2024-12-05 13:25:55.109196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:32.559 [2024-12-05 13:25:55.109249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:32.559 [2024-12-05 13:25:55.109259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:32.559 [2024-12-05 13:25:55.109266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:32.559 [2024-12-05 13:25:55.109273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:32.559 [2024-12-05 13:25:55.110117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.570 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.570 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:33.570 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:33.570 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:33.570 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:33.570 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:33.570 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:33.570 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:33.570 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:33.570 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.6rv 00:21:33.570 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:33.570 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.6rv 00:21:33.570 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.6rv 00:21:33.570 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.6rv 00:21:33.570 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:33.570 [2024-12-05 13:25:55.977716] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:33.570 [2024-12-05 13:25:55.993720] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:33.570 [2024-12-05 13:25:55.994059] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.570 malloc0 00:21:33.570 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:33.570 13:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=963301 00:21:33.570 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:33.570 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 963301 /var/tmp/bdevperf.sock 00:21:33.570 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 963301 ']' 00:21:33.570 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:33.570 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.570 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:33.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:33.570 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.570 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:33.570 [2024-12-05 13:25:56.120506] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:21:33.570 [2024-12-05 13:25:56.120575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid963301 ] 00:21:33.865 [2024-12-05 13:25:56.189743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.865 [2024-12-05 13:25:56.225843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.865 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.865 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:21:33.865 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.6rv 00:21:34.127 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:34.127 [2024-12-05 13:25:56.656105] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:34.388 TLSTESTn1 00:21:34.388 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:34.389 Running I/O for 10 seconds... 
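The two rpc.py calls just above are the entire client-side TLS setup for the 10-second run that follows: register the 0600-permission interchange-format PSK with the keyring, then attach the controller by key name. Stripped of the Jenkins workspace prefix, the sequence recorded above reduces to:

RPC="scripts/rpc.py -s /var/tmp/bdevperf.sock"   # unquoted expansion below is intentional
# 1. Register the PSK file generated by fips.sh (NVMeTLSkey-1 format).
$RPC keyring_file_add_key key0 /tmp/spdk-psk.6rv
# 2. Attach over TCP with that key; SPDK negotiates TLS on port 4420.
$RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0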
00:21:36.720 4945.00 IOPS, 19.32 MiB/s [2024-12-05T12:25:59.860Z] 4941.50 IOPS, 19.30 MiB/s [2024-12-05T12:26:01.244Z] 4855.00 IOPS, 18.96 MiB/s [2024-12-05T12:26:02.191Z] 4718.00 IOPS, 18.43 MiB/s [2024-12-05T12:26:03.132Z] 4712.80 IOPS, 18.41 MiB/s [2024-12-05T12:26:04.102Z] 4705.50 IOPS, 18.38 MiB/s [2024-12-05T12:26:05.042Z] 4721.00 IOPS, 18.44 MiB/s [2024-12-05T12:26:05.981Z] 4718.62 IOPS, 18.43 MiB/s [2024-12-05T12:26:06.920Z] 4720.22 IOPS, 18.44 MiB/s [2024-12-05T12:26:06.920Z] 4710.80 IOPS, 18.40 MiB/s 00:21:44.352 Latency(us) 00:21:44.352 [2024-12-05T12:26:06.920Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.352 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:44.352 Verification LBA range: start 0x0 length 0x2000 00:21:44.352 TLSTESTn1 : 10.03 4710.69 18.40 0.00 0.00 27125.78 6444.37 45219.84 00:21:44.352 [2024-12-05T12:26:06.920Z] =================================================================================================================== 00:21:44.352 [2024-12-05T12:26:06.920Z] Total : 4710.69 18.40 0.00 0.00 27125.78 6444.37 45219.84 00:21:44.352 { 00:21:44.352 "results": [ 00:21:44.352 { 00:21:44.352 "job": "TLSTESTn1", 00:21:44.352 "core_mask": "0x4", 00:21:44.352 "workload": "verify", 00:21:44.352 "status": "finished", 00:21:44.352 "verify_range": { 00:21:44.352 "start": 0, 00:21:44.352 "length": 8192 00:21:44.352 }, 00:21:44.352 "queue_depth": 128, 00:21:44.352 "io_size": 4096, 00:21:44.352 "runtime": 10.027198, 00:21:44.352 "iops": 4710.687871128106, 00:21:44.352 "mibps": 18.401124496594164, 00:21:44.352 "io_failed": 0, 00:21:44.352 "io_timeout": 0, 00:21:44.352 "avg_latency_us": 27125.77669270668, 00:21:44.352 "min_latency_us": 6444.373333333333, 00:21:44.352 "max_latency_us": 45219.84 00:21:44.352 } 00:21:44.352 ], 00:21:44.352 "core_count": 1 00:21:44.352 } 00:21:44.352 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:44.352 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:44.352 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:21:44.352 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:21:44.352 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:44.613 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:44.613 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:44.613 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:44.613 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:44.613 13:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:44.613 nvmf_trace.0 00:21:44.613 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:21:44.613 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 963301 00:21:44.613 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 963301 ']' 00:21:44.613 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 
-- # kill -0 963301 00:21:44.613 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:44.613 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:44.613 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 963301 00:21:44.613 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:44.613 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:44.613 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 963301' 00:21:44.613 killing process with pid 963301 00:21:44.613 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 963301 00:21:44.613 Received shutdown signal, test time was about 10.000000 seconds 00:21:44.613 00:21:44.613 Latency(us) 00:21:44.613 [2024-12-05T12:26:07.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.613 [2024-12-05T12:26:07.181Z] =================================================================================================================== 00:21:44.613 [2024-12-05T12:26:07.181Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:44.613 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 963301 00:21:44.874 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:44.874 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:44.874 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:44.874 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:44.874 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:44.874 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:44.874 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:44.874 rmmod nvme_tcp 00:21:44.874 rmmod nvme_fabrics 00:21:44.874 rmmod nvme_keyring 00:21:44.874 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:44.874 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:44.874 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:44.874 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 962949 ']' 00:21:44.874 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 962949 00:21:44.874 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 962949 ']' 00:21:44.874 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 962949 00:21:44.874 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:21:44.874 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:44.875 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 962949 00:21:44.875 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:44.875 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:44.875 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 962949' 00:21:44.875 killing process with pid 962949 00:21:44.875 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 962949 00:21:44.875 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 962949 00:21:44.875 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:44.875 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:44.875 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:44.875 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:44.875 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:44.875 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:44.875 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:45.136 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:45.136 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:45.136 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.136 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.136 13:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.053 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:47.053 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.6rv 00:21:47.053 00:21:47.053 real 0m23.586s 00:21:47.053 user 0m23.230s 00:21:47.053 sys 0m10.900s 00:21:47.053 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:47.053 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:47.053 ************************************ 00:21:47.053 END TEST nvmf_fips 00:21:47.053 ************************************ 00:21:47.053 13:26:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:47.053 13:26:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:47.053 13:26:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:47.053 13:26:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:47.053 ************************************ 00:21:47.053 START TEST nvmf_control_msg_list 00:21:47.053 ************************************ 00:21:47.053 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:47.314 * Looking for test storage... 
00:21:47.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:47.314 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:47.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.315 --rc genhtml_branch_coverage=1 00:21:47.315 --rc genhtml_function_coverage=1 00:21:47.315 --rc genhtml_legend=1 00:21:47.315 --rc geninfo_all_blocks=1 00:21:47.315 --rc geninfo_unexecuted_blocks=1 00:21:47.315 00:21:47.315 ' 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:47.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.315 --rc genhtml_branch_coverage=1 00:21:47.315 --rc genhtml_function_coverage=1 00:21:47.315 --rc genhtml_legend=1 00:21:47.315 --rc geninfo_all_blocks=1 00:21:47.315 --rc geninfo_unexecuted_blocks=1 00:21:47.315 00:21:47.315 ' 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:47.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.315 --rc genhtml_branch_coverage=1 00:21:47.315 --rc genhtml_function_coverage=1 00:21:47.315 --rc genhtml_legend=1 00:21:47.315 --rc geninfo_all_blocks=1 00:21:47.315 --rc geninfo_unexecuted_blocks=1 00:21:47.315 00:21:47.315 ' 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:47.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:47.315 --rc genhtml_branch_coverage=1 00:21:47.315 --rc genhtml_function_coverage=1 00:21:47.315 --rc genhtml_legend=1 00:21:47.315 --rc geninfo_all_blocks=1 00:21:47.315 --rc geninfo_unexecuted_blocks=1 00:21:47.315 00:21:47.315 ' 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:47.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.315 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.316 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.316 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:47.316 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:47.316 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:47.316 13:26:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:55.477 13:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:21:55.477 Found 0000:31:00.0 (0x8086 - 0x159b) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.477 13:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:21:55.477 Found 0000:31:00.1 (0x8086 - 0x159b) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:21:55.477 Found net devices under 0000:31:00.0: cvl_0_0 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:21:55.477 Found net devices under 0000:31:00.1: cvl_0_1 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:55.477 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.478 13:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:55.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:21:55.478 00:21:55.478 --- 10.0.0.2 ping statistics --- 00:21:55.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.478 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:55.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:21:55.478 00:21:55.478 --- 10.0.0.1 ping statistics --- 00:21:55.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.478 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=970010 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 970010 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 970010 ']' 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.478 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:55.478 [2024-12-05 13:26:17.853970] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:21:55.478 [2024-12-05 13:26:17.854039] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.478 [2024-12-05 13:26:17.946084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.478 [2024-12-05 13:26:17.986830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.478 [2024-12-05 13:26:17.986876] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.478 [2024-12-05 13:26:17.986890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.478 [2024-12-05 13:26:17.986896] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.478 [2024-12-05 13:26:17.986902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:55.478 [2024-12-05 13:26:17.987528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:56.420 [2024-12-05 13:26:18.682702] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:56.420 Malloc0 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.420 13:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:56.420 [2024-12-05 13:26:18.733560] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=970354 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=970355 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=970356 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 970354 00:21:56.420 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:56.420 [2024-12-05 13:26:18.803897] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:56.420 [2024-12-05 13:26:18.823996] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:56.420 [2024-12-05 13:26:18.824259] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:57.361 Initializing NVMe Controllers 00:21:57.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:57.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:57.361 Initialization complete. Launching workers. 
00:21:57.361 ======================================================== 00:21:57.361 Latency(us) 00:21:57.361 Device Information : IOPS MiB/s Average min max 00:21:57.361 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40896.04 40724.85 40998.68 00:21:57.361 ======================================================== 00:21:57.361 Total : 25.00 0.10 40896.04 40724.85 40998.68 00:21:57.361 00:21:57.620 Initializing NVMe Controllers 00:21:57.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:57.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:57.621 Initialization complete. Launching workers. 00:21:57.621 ======================================================== 00:21:57.621 Latency(us) 00:21:57.621 Device Information : IOPS MiB/s Average min max 00:21:57.621 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1552.00 6.06 644.37 269.20 904.42 00:21:57.621 ======================================================== 00:21:57.621 Total : 1552.00 6.06 644.37 269.20 904.42 00:21:57.621 00:21:57.621 [2024-12-05 13:26:19.928225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b1650 is same with the state(6) to be set 00:21:57.621 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 970355 00:21:57.621 Initializing NVMe Controllers 00:21:57.621 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:57.621 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:57.621 Initialization complete. Launching workers. 00:21:57.621 ======================================================== 00:21:57.621 Latency(us) 00:21:57.621 Device Information : IOPS MiB/s Average min max 00:21:57.621 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1514.00 5.91 660.47 149.26 40560.54 00:21:57.621 ======================================================== 00:21:57.621 Total : 1514.00 5.91 660.47 149.26 40560.54 00:21:57.621 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 970356 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:57.621 rmmod nvme_tcp 00:21:57.621 rmmod nvme_fabrics 00:21:57.621 rmmod nvme_keyring 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:57.621 13:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 970010 ']' 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 970010 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 970010 ']' 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 970010 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:57.621 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 970010 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 970010' 00:21:57.881 killing process with pid 970010 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 970010 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 970010 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.881 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:00.427 00:22:00.427 real 0m12.850s 00:22:00.427 user 0m8.044s 00:22:00.427 sys 0m6.910s 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@10 -- # set +x 00:22:00.427 ************************************ 00:22:00.427 END TEST nvmf_control_msg_list 00:22:00.427 ************************************ 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:00.427 ************************************ 00:22:00.427 START TEST nvmf_wait_for_buf 00:22:00.427 ************************************ 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:00.427 * Looking for test storage... 00:22:00.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:00.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.427 --rc genhtml_branch_coverage=1 00:22:00.427 --rc genhtml_function_coverage=1 00:22:00.427 --rc genhtml_legend=1 00:22:00.427 --rc geninfo_all_blocks=1 00:22:00.427 --rc geninfo_unexecuted_blocks=1 00:22:00.427 00:22:00.427 ' 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:00.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.427 --rc genhtml_branch_coverage=1 00:22:00.427 --rc genhtml_function_coverage=1 00:22:00.427 --rc genhtml_legend=1 00:22:00.427 --rc geninfo_all_blocks=1 00:22:00.427 --rc geninfo_unexecuted_blocks=1 00:22:00.427 00:22:00.427 ' 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:00.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.427 --rc genhtml_branch_coverage=1 00:22:00.427 --rc genhtml_function_coverage=1 00:22:00.427 --rc genhtml_legend=1 00:22:00.427 --rc geninfo_all_blocks=1 00:22:00.427 --rc geninfo_unexecuted_blocks=1 00:22:00.427 00:22:00.427 ' 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:00.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:00.427 --rc genhtml_branch_coverage=1 00:22:00.427 --rc genhtml_function_coverage=1 00:22:00.427 --rc genhtml_legend=1 00:22:00.427 --rc geninfo_all_blocks=1 00:22:00.427 --rc geninfo_unexecuted_blocks=1 00:22:00.427 00:22:00.427 ' 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.427 13:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:00.427 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:00.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:00.428 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.572 
13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:08.572 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:08.572 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:08.572 Found net devices under 0000:31:00.0: cvl_0_0 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:08.572 Found net devices under 0000:31:00.1: cvl_0_1 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.572 13:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.572 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.572 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.572 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.572 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:08.572 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:08.833 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.833 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:22:08.833 00:22:08.833 --- 10.0.0.2 ping statistics --- 00:22:08.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.833 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.833 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:08.833 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:22:08.833 00:22:08.833 --- 10.0.0.1 ping statistics --- 00:22:08.833 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.833 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=975391 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 975391 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 975391 ']' 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.833 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:08.833 [2024-12-05 13:26:31.278727] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
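The trace above pins down the loopback-free TCP topology that nvmftestinit builds on a phy node: the first E810 port (cvl_0_0) is moved into a fresh network namespace and addressed as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator, a single iptables ACCEPT rule opens the NVMe/TCP port, and reachability is verified in both directions before the target starts inside the namespace. A minimal sketch of the equivalent manual setup, condensed from the commands in the trace (interface and namespace names exactly as they appear above; the flush steps and the iptables comment tag are omitted, and the absolute Jenkins workspace path is shortened to a relative build path):

    # target NIC lives in its own namespace; initiator NIC stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # target reachable from initiator
    # start the target inside the namespace, paused until RPCs configure it
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc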
00:22:08.833 [2024-12-05 13:26:31.278778] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.833 [2024-12-05 13:26:31.365208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.145 [2024-12-05 13:26:31.399571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.145 [2024-12-05 13:26:31.399601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:09.145 [2024-12-05 13:26:31.399609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.145 [2024-12-05 13:26:31.399616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.145 [2024-12-05 13:26:31.399621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.145 [2024-12-05 13:26:31.400192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.713 13:26:32 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:09.713 Malloc0 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:09.713 [2024-12-05 13:26:32.216424] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.713 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:09.714 [2024-12-05 13:26:32.252662] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.714 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.714 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:09.972 [2024-12-05 13:26:32.359951] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:11.354 Initializing NVMe Controllers 00:22:11.354 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:11.354 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:11.354 Initialization complete. Launching workers. 00:22:11.354 ======================================================== 00:22:11.354 Latency(us) 00:22:11.354 Device Information : IOPS MiB/s Average min max 00:22:11.354 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 123.55 15.44 33533.40 8000.21 71833.94 00:22:11.354 ======================================================== 00:22:11.354 Total : 123.55 15.44 33533.40 8000.21 71833.94 00:22:11.354 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:11.354 rmmod nvme_tcp 00:22:11.354 rmmod nvme_fabrics 00:22:11.354 rmmod nvme_keyring 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 975391 ']' 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 975391 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 975391 ']' 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 975391 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@959 -- # uname 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.354 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 975391 00:22:11.614 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:11.614 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:11.614 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 975391' 00:22:11.614 killing process with pid 975391 00:22:11.614 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 975391 00:22:11.614 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 975391 00:22:11.614 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:11.614 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:11.614 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:11.614 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:11.614 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:11.614 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:11.614 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:11.614 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:11.615 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:11.615 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.615 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.615 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.158 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:14.158 00:22:14.158 real 0m13.666s 00:22:14.158 user 0m5.463s 00:22:14.158 sys 0m6.762s 00:22:14.158 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:14.158 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:14.158 ************************************ 00:22:14.158 END TEST nvmf_wait_for_buf 00:22:14.158 ************************************ 00:22:14.158 13:26:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:14.158 13:26:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:14.158 13:26:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:14.158 13:26:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:14.158 13:26:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:14.158 13:26:36 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:22.299 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:22.299 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:22.299 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:22.300 Found net devices under 0000:31:00.0: cvl_0_0 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:22.300 Found net devices under 0000:31:00.1: cvl_0_1 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:22.300 ************************************ 00:22:22.300 START TEST nvmf_perf_adq 00:22:22.300 ************************************ 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:22.300 * Looking for test storage... 00:22:22.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:22.300 13:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:22.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.300 --rc genhtml_branch_coverage=1 00:22:22.300 --rc genhtml_function_coverage=1 00:22:22.300 --rc genhtml_legend=1 00:22:22.300 --rc geninfo_all_blocks=1 00:22:22.300 --rc geninfo_unexecuted_blocks=1 00:22:22.300 00:22:22.300 ' 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:22.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.300 --rc genhtml_branch_coverage=1 00:22:22.300 --rc genhtml_function_coverage=1 00:22:22.300 --rc genhtml_legend=1 00:22:22.300 --rc geninfo_all_blocks=1 00:22:22.300 --rc geninfo_unexecuted_blocks=1 00:22:22.300 00:22:22.300 ' 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:22.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.300 --rc genhtml_branch_coverage=1 00:22:22.300 --rc genhtml_function_coverage=1 00:22:22.300 --rc genhtml_legend=1 00:22:22.300 --rc geninfo_all_blocks=1 00:22:22.300 --rc geninfo_unexecuted_blocks=1 00:22:22.300 00:22:22.300 ' 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:22.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.300 --rc genhtml_branch_coverage=1 00:22:22.300 --rc genhtml_function_coverage=1 00:22:22.300 --rc genhtml_legend=1 00:22:22.300 --rc geninfo_all_blocks=1 00:22:22.300 --rc geninfo_unexecuted_blocks=1 00:22:22.300 00:22:22.300 ' 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
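The wait_for_buf run above passes precisely because retry_count came back non-zero (1958): the test shrinks the shared iobuf small pool to 154 buffers, gives the TCP transport only 24 shared buffers (-n 24 -b 24), then drives 128 KiB random reads at queue depth 4 through spdk_nvme_perf so that buffer requests are forced to queue and retry. A condensed sketch of that RPC sequence, assuming SPDK's standard scripts/rpc.py client in place of the harness's rpc_cmd wrapper (rpc_cmd forwards to rpc.py; method names and arguments here are copied from the trace):

    # starve the shared iobuf small pool so buffer allocations must wait
    scripts/rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    scripts/rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    scripts/rpc.py framework_start_init
    scripts/rpc.py bdev_malloc_create -b Malloc0 32 512
    # only 24 shared buffers for the whole transport guarantees contention
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    scripts/rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # after the perf run, a non-zero small_pool.retry count is the pass condition
    scripts/rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'

The jq filter is the one the test itself applies at wait_for_buf.sh line 32; 1958 retries against a 154-buffer pool confirms the wait-for-buffer path was actually exercised rather than sitting idle.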
00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:22.300 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:22.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:22.301 13:26:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:22.301 13:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:30.440 13:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:30.440 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:30.441 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:30.441 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:30.441 Found net devices under 0000:31:00.0: cvl_0_0 00:22:30.441 13:26:52 
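Each `Found net devices under ...` pair above comes from a plain sysfs walk: for every matched PCI function, the harness globs the kernel's per-device net directory and strips the path down to the interface name. A condensed, hedged replay using only the sysfs paths visible in the trace:

  for pci in 0000:31:00.0 0000:31:00.1; do
    for netdir in /sys/bus/pci/devices/$pci/net/*; do
      [ -e "$netdir" ] || continue                 # skip a function with no bound netdev
      echo "Found net devices under $pci: ${netdir##*/}"
    done
  done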
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:30.441 Found net devices under 0000:31:00.1: cvl_0_1 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:30.441 13:26:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:31.852 13:26:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:33.767 13:26:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:39.062 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:39.062 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:39.062 Found net devices under 0000:31:00.0: cvl_0_0 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:39.062 Found net devices under 0000:31:00.1: cvl_0_1 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.062 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.063 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.063 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.063 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:39.063 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:39.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:39.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:22:39.325 00:22:39.325 --- 10.0.0.2 ping statistics --- 00:22:39.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.325 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:39.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:22:39.325 00:22:39.325 --- 10.0.0.1 ping statistics --- 00:22:39.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.325 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=986757 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 986757 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 986757 ']' 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:39.325 13:27:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:39.325 [2024-12-05 13:27:01.792131] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
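nvmf_tcp_init above lets a single host play both ends of the fabric: one ice port is moved into a private network namespace to act as the target, while its sibling stays in the root namespace as the initiator. Collected from the trace into one sequence (interface names are the harness's renames of the two E810 ports):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                 # root ns reaches the target address

Every later nvmf_tgt launch and RPC call is then wrapped in `ip netns exec cvl_0_0_ns_spdk`, which is why the nvmfappstart line above carries that prefix.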
00:22:39.325 [2024-12-05 13:27:01.792196] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.325 [2024-12-05 13:27:01.887022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.586 [2024-12-05 13:27:01.930081] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.586 [2024-12-05 13:27:01.930121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.586 [2024-12-05 13:27:01.930129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.586 [2024-12-05 13:27:01.930136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.586 [2024-12-05 13:27:01.930142] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:39.586 [2024-12-05 13:27:01.931993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.586 [2024-12-05 13:27:01.932115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.586 [2024-12-05 13:27:01.932274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.586 [2024-12-05 13:27:01.932275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.158 
13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.158 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.426 [2024-12-05 13:27:02.783467] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.426 Malloc1 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:40.426 [2024-12-05 13:27:02.857283] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=987039 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:40.426 13:27:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
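adq_configure_nvmf_target above is an ordinary RPC sequence issued through the harness's rpc_cmd wrapper; flattened into a standalone script it would look roughly like this, assuming the stock scripts/rpc.py helper from the checked-out tree (RPC names and arguments exactly as they appear in the trace, placement ID 0 for this first, non-busy-poll pass):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed helper path
  $rpc sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

`--enable-placement-id` is the knob that distinguishes the two passes in this log: 0 here, 1 when the busy-poll run is configured further down.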
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:42.554 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:42.554 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.554 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:42.554 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.554 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:42.554 "tick_rate": 2400000000, 00:22:42.554 "poll_groups": [ 00:22:42.554 { 00:22:42.554 "name": "nvmf_tgt_poll_group_000", 00:22:42.554 "admin_qpairs": 1, 00:22:42.554 "io_qpairs": 1, 00:22:42.554 "current_admin_qpairs": 1, 00:22:42.554 "current_io_qpairs": 1, 00:22:42.554 "pending_bdev_io": 0, 00:22:42.554 "completed_nvme_io": 20801, 00:22:42.554 "transports": [ 00:22:42.554 { 00:22:42.554 "trtype": "TCP" 00:22:42.554 } 00:22:42.554 ] 00:22:42.554 }, 00:22:42.554 { 00:22:42.554 "name": "nvmf_tgt_poll_group_001", 00:22:42.554 "admin_qpairs": 0, 00:22:42.554 "io_qpairs": 1, 00:22:42.554 "current_admin_qpairs": 0, 00:22:42.554 "current_io_qpairs": 1, 00:22:42.554 "pending_bdev_io": 0, 00:22:42.554 "completed_nvme_io": 28019, 00:22:42.554 "transports": [ 00:22:42.554 { 00:22:42.554 "trtype": "TCP" 00:22:42.554 } 00:22:42.554 ] 00:22:42.554 }, 00:22:42.554 { 00:22:42.554 "name": "nvmf_tgt_poll_group_002", 00:22:42.554 "admin_qpairs": 0, 00:22:42.554 "io_qpairs": 1, 00:22:42.554 "current_admin_qpairs": 0, 00:22:42.554 "current_io_qpairs": 1, 00:22:42.554 "pending_bdev_io": 0, 00:22:42.554 "completed_nvme_io": 22432, 00:22:42.554 "transports": [ 00:22:42.554 { 00:22:42.554 "trtype": "TCP" 00:22:42.554 } 00:22:42.554 ] 00:22:42.554 }, 00:22:42.554 { 00:22:42.554 "name": "nvmf_tgt_poll_group_003", 00:22:42.554 "admin_qpairs": 0, 00:22:42.554 "io_qpairs": 1, 00:22:42.554 "current_admin_qpairs": 0, 00:22:42.554 "current_io_qpairs": 1, 00:22:42.554 "pending_bdev_io": 0, 00:22:42.554 "completed_nvme_io": 20769, 00:22:42.554 "transports": [ 00:22:42.554 { 00:22:42.554 "trtype": "TCP" 00:22:42.554 } 00:22:42.554 ] 00:22:42.554 } 00:22:42.554 ] 00:22:42.554 }' 00:22:42.554 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:42.554 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:42.554 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:42.554 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:42.554 13:27:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 987039 00:22:50.688 Initializing NVMe Controllers 00:22:50.688 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:50.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:50.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:50.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:50.688 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 
7 00:22:50.688 Initialization complete. Launching workers. 00:22:50.688 ======================================================== 00:22:50.688 Latency(us) 00:22:50.688 Device Information : IOPS MiB/s Average min max 00:22:50.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14220.50 55.55 4500.98 1431.83 9598.23 00:22:50.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14811.60 57.86 4320.97 1323.26 9698.80 00:22:50.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14036.90 54.83 4558.88 1412.50 11018.10 00:22:50.688 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11573.50 45.21 5529.59 1833.25 11175.90 00:22:50.688 ======================================================== 00:22:50.688 Total : 54642.49 213.45 4684.92 1323.26 11175.90 00:22:50.688 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:50.688 rmmod nvme_tcp 00:22:50.688 rmmod nvme_fabrics 00:22:50.688 rmmod nvme_keyring 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 986757 ']' 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 986757 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 986757 ']' 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 986757 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 986757 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 986757' 00:22:50.688 killing process with pid 986757 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 986757 00:22:50.688 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 986757 00:22:50.948 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
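The pass/fail gate for this phase is the nvmf_get_stats check above: with ADQ in effect, each of the four poll groups should own exactly one I/O qpair while the 4-core perf job runs. The jq pipeline from the trace, shown standalone (same assumed rpc.py path as before):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed helper path
  count=$($rpc nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
            | wc -l)
  [[ $count -ne 4 ]] && echo "ADQ spread check failed: $count of 4 poll groups" >&2

jq emits one line per poll group that currently holds a single I/O qpair, so `wc -l` landing on 4 (as it does above) means the connections were spread one per group rather than piling onto a single reactor.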
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:50.948 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:50.948 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:50.948 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:50.948 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:50.948 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:50.948 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:50.948 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:50.948 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:50.948 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.948 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.948 13:27:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.858 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:52.858 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:52.858 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:52.858 13:27:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:54.768 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:56.688 13:27:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:01.978 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:01.979 13:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:01.979 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:01.979 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:01.979 Found net devices under 0000:31:00.0: cvl_0_0 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:01.979 13:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:01.979 Found net devices under 0000:31:00.1: cvl_0_1 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:01.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:23:01.979 00:23:01.979 --- 10.0.0.2 ping statistics --- 00:23:01.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.979 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:01.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:23:01.979 00:23:01.979 --- 10.0.0.1 ping statistics --- 00:23:01.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.979 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:01.979 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:01.980 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.980 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:01.980 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:01.980 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:01.980 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:01.980 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:01.980 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:01.980 net.core.busy_poll = 1 00:23:01.980 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:23:01.980 net.core.busy_read = 1 00:23:01.980 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:01.980 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:02.241 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:02.241 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:02.241 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:02.241 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:02.241 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:02.241 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.241 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.241 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=992041 00:23:02.241 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 992041 00:23:02.241 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 992041 ']' 00:23:02.241 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:02.241 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.241 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.241 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.241 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.241 13:27:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:02.501 [2024-12-05 13:27:24.869057] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:23:02.501 [2024-12-05 13:27:24.869118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.501 [2024-12-05 13:27:24.959311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:02.501 [2024-12-05 13:27:25.000024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
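
Condensed, the adq_configure_driver sequence just traced looks like the sketch below. DEV stands in for the target port (cvl_0_0, run inside its namespace in the log), and an ADQ-capable ice/E810 NIC is assumed:

DEV=eth_tgt   # placeholder for cvl_0_0

# Enable hardware traffic-class offload and disable the packet-inspect
# optimization, as ADQ on ice/E810 requires.
ethtool --offload "$DEV" hw-tc-offload on
ethtool --set-priv-flags "$DEV" channel-pkt-inspect-optimize off

# Busy polling keeps application threads spinning on their queues
# instead of sleeping on interrupts.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 = queues 0-1 (default traffic), TC1 =
# queues 2-3 (dedicated); "hw 1 mode channel" pushes the split into
# the NIC.
tc qdisc add dev "$DEV" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel

# Steer inbound NVMe/TCP (TCP dport 4420 toward the target address)
# into TC1 entirely in hardware (skip_sw).
tc qdisc add dev "$DEV" ingress
tc filter add dev "$DEV" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The final set_xps_rxqs step in the trace then configures transmit packet steering so a flow's TX queue follows the RX queue it arrived on, keeping each connection pinned to one queue pair end to end.
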
00:23:02.501 [2024-12-05 13:27:25.000061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.501 [2024-12-05 13:27:25.000069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.501 [2024-12-05 13:27:25.000077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.501 [2024-12-05 13:27:25.000083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.501 [2024-12-05 13:27:25.001690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.501 [2024-12-05 13:27:25.001824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.501 [2024-12-05 13:27:25.001983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.501 [2024-12-05 13:27:25.002127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.446 13:27:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.446 [2024-12-05 13:27:25.833026] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.446 Malloc1 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:03.446 [2024-12-05 13:27:25.905264] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=992393 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:03.446 13:27:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:05.359 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:05.359 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.359 13:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:05.620 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.620 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:05.620 "tick_rate": 2400000000, 00:23:05.620 "poll_groups": [ 00:23:05.620 { 00:23:05.620 "name": "nvmf_tgt_poll_group_000", 00:23:05.620 "admin_qpairs": 1, 00:23:05.620 "io_qpairs": 3, 00:23:05.620 "current_admin_qpairs": 1, 00:23:05.620 "current_io_qpairs": 3, 00:23:05.620 "pending_bdev_io": 0, 00:23:05.620 "completed_nvme_io": 29607, 00:23:05.620 "transports": [ 00:23:05.620 { 00:23:05.620 "trtype": "TCP" 00:23:05.620 } 00:23:05.620 ] 00:23:05.620 }, 00:23:05.620 { 00:23:05.620 "name": "nvmf_tgt_poll_group_001", 00:23:05.620 "admin_qpairs": 0, 00:23:05.620 "io_qpairs": 1, 00:23:05.620 "current_admin_qpairs": 0, 00:23:05.620 "current_io_qpairs": 1, 00:23:05.620 "pending_bdev_io": 0, 00:23:05.620 "completed_nvme_io": 39286, 00:23:05.620 "transports": [ 00:23:05.620 { 00:23:05.620 "trtype": "TCP" 00:23:05.620 } 00:23:05.620 ] 00:23:05.620 }, 00:23:05.621 { 00:23:05.621 "name": "nvmf_tgt_poll_group_002", 00:23:05.621 "admin_qpairs": 0, 00:23:05.621 "io_qpairs": 0, 00:23:05.621 "current_admin_qpairs": 0, 00:23:05.621 "current_io_qpairs": 0, 00:23:05.621 "pending_bdev_io": 0, 00:23:05.621 "completed_nvme_io": 0, 00:23:05.621 "transports": [ 00:23:05.621 { 00:23:05.621 "trtype": "TCP" 00:23:05.621 } 00:23:05.621 ] 00:23:05.621 }, 00:23:05.621 { 00:23:05.621 "name": "nvmf_tgt_poll_group_003", 00:23:05.621 "admin_qpairs": 0, 00:23:05.621 "io_qpairs": 0, 00:23:05.621 "current_admin_qpairs": 0, 00:23:05.621 "current_io_qpairs": 0, 00:23:05.621 "pending_bdev_io": 0, 00:23:05.621 "completed_nvme_io": 0, 00:23:05.621 "transports": [ 00:23:05.621 { 00:23:05.621 "trtype": "TCP" 00:23:05.621 } 00:23:05.621 ] 00:23:05.621 } 00:23:05.621 ] 00:23:05.621 }' 00:23:05.621 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:05.621 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:05.621 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:23:05.621 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:23:05.621 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 992393 00:23:13.762 Initializing NVMe Controllers 00:23:13.762 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:13.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:13.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:13.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:13.762 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:13.762 Initialization complete. Launching workers. 
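
The pass/fail gate at perf_adq.sh@107-109 above is a simple count of idle poll groups: with ADQ steering active, the I/O qpairs should pile onto a subset of cores, leaving the others with zero current_io_qpairs (here groups 002 and 003). A sketch of the same check, assuming the default rpc.py socket:

# Count poll groups carrying no I/O qpairs; 'length' emits one line per
# selected object and wc -l tallies them, exactly as in the trace.
count=$(./scripts/rpc.py nvmf_get_stats \
    | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
    | wc -l)

# Four reactors with qpairs steered onto two of them: expect >= 2 idle
# poll groups, otherwise the hardware steering did not take effect.
if [[ $count -lt 2 ]]; then
    echo "ADQ steering ineffective: only $count idle poll groups"
    exit 1
fi
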
00:23:13.762 ======================================================== 00:23:13.762 Latency(us) 00:23:13.762 Device Information : IOPS MiB/s Average min max 00:23:13.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7250.30 28.32 8827.42 1413.22 57645.58 00:23:13.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 21110.60 82.46 3031.26 1097.75 45562.25 00:23:13.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6029.00 23.55 10615.09 1396.23 61348.79 00:23:13.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6628.40 25.89 9680.18 1537.41 57140.03 00:23:13.763 ======================================================== 00:23:13.763 Total : 41018.30 160.23 6244.91 1097.75 61348.79 00:23:13.763 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:13.763 rmmod nvme_tcp 00:23:13.763 rmmod nvme_fabrics 00:23:13.763 rmmod nvme_keyring 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 992041 ']' 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 992041 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 992041 ']' 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 992041 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 992041 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 992041' 00:23:13.763 killing process with pid 992041 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 992041 00:23:13.763 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 992041 00:23:14.024 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:14.024 13:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:14.024 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:14.024 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:14.024 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:14.024 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:14.024 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:14.024 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:14.024 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:14.024 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.024 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.024 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:17.321 00:23:17.321 real 0m55.290s 00:23:17.321 user 2m50.154s 00:23:17.321 sys 0m12.176s 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.321 ************************************ 00:23:17.321 END TEST nvmf_perf_adq 00:23:17.321 ************************************ 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:17.321 ************************************ 00:23:17.321 START TEST nvmf_shutdown 00:23:17.321 ************************************ 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:17.321 * Looking for test storage... 
00:23:17.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:17.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.321 --rc genhtml_branch_coverage=1 00:23:17.321 --rc genhtml_function_coverage=1 00:23:17.321 --rc genhtml_legend=1 00:23:17.321 --rc geninfo_all_blocks=1 00:23:17.321 --rc geninfo_unexecuted_blocks=1 00:23:17.321 00:23:17.321 ' 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:17.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.321 --rc genhtml_branch_coverage=1 00:23:17.321 --rc genhtml_function_coverage=1 00:23:17.321 --rc genhtml_legend=1 00:23:17.321 --rc geninfo_all_blocks=1 00:23:17.321 --rc geninfo_unexecuted_blocks=1 00:23:17.321 00:23:17.321 ' 00:23:17.321 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:17.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.321 --rc genhtml_branch_coverage=1 00:23:17.321 --rc genhtml_function_coverage=1 00:23:17.321 --rc genhtml_legend=1 00:23:17.322 --rc geninfo_all_blocks=1 00:23:17.322 --rc geninfo_unexecuted_blocks=1 00:23:17.322 00:23:17.322 ' 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:17.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:17.322 --rc genhtml_branch_coverage=1 00:23:17.322 --rc genhtml_function_coverage=1 00:23:17.322 --rc genhtml_legend=1 00:23:17.322 --rc geninfo_all_blocks=1 00:23:17.322 --rc geninfo_unexecuted_blocks=1 00:23:17.322 00:23:17.322 ' 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
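
The lt/cmp_versions trace above (gating on the installed lcov version) implements a field-wise dotted-version compare. A simplified reconstruction of the logic being stepped through, with missing fields defaulting to 0 (the real scripts/common.sh also normalizes non-numeric fields via its decimal helper):

lt() { cmp_versions "$1" "<" "$2"; }

cmp_versions() {
    local ver1 ver2 ver1_l ver2_l v op=$2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}

    # Walk the longer of the two versions; absent fields count as 0,
    # so 1.15 vs 2 compares as 1.15 vs 2.0.
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        if ((${ver1[v]:-0} > ${ver2[v]:-0})); then
            [[ $op == ">" ]]; return
        elif ((${ver1[v]:-0} < ${ver2[v]:-0})); then
            [[ $op == "<" ]]; return
        fi
    done
    [[ $op == "==" ]]
}

lt 1.15 2 && echo "lcov is older than 2.x"   # true, as in the trace
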
00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:17.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:17.322 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:17.322 ************************************ 00:23:17.322 START TEST nvmf_shutdown_tc1 00:23:17.322 ************************************ 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:17.322 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:25.464 13:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:25.464 13:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:25.464 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:25.464 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:25.464 Found net devices under 0000:31:00.0: cvl_0_0 00:23:25.464 13:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:25.464 Found net devices under 0000:31:00.1: cvl_0_1 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:25.464 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:25.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:23:25.465 00:23:25.465 --- 10.0.0.2 ping statistics --- 00:23:25.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.465 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:23:25.465 00:23:25.465 --- 10.0.0.1 ping statistics --- 00:23:25.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.465 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=999208 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 999208 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 999208 ']' 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
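
nvmfappstart above launches the target inside the namespace and blocks in waitforlisten until the RPC socket answers. A sketch of that pattern, assuming SPDK's default socket path and using rpc_get_methods as the liveness probe (this mirrors, rather than reproduces, the helper in autotest_common.sh):

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# Poll the RPC socket; max_retries=100 as in the trace above.
for ((i = 0; i < 100; i++)); do
    if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break   # target is up and serving RPCs
    fi
    kill -0 "$nvmfpid" 2>/dev/null || { echo "target died"; exit 1; }
    sleep 0.5
done
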
00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:25.465 13:27:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:25.465 [2024-12-05 13:27:47.794748] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:23:25.465 [2024-12-05 13:27:47.794818] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.465 [2024-12-05 13:27:47.904673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:25.465 [2024-12-05 13:27:47.956226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.465 [2024-12-05 13:27:47.956280] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.465 [2024-12-05 13:27:47.956289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.465 [2024-12-05 13:27:47.956296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.465 [2024-12-05 13:27:47.956302] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:25.465 [2024-12-05 13:27:47.958528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.465 [2024-12-05 13:27:47.958704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:25.465 [2024-12-05 13:27:47.958856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:25.465 [2024-12-05 13:27:47.958856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.036 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.036 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:26.036 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:26.036 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:26.036 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.296 [2024-12-05 13:27:48.647462] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:26.296 13:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.296 13:27:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.296 Malloc1 00:23:26.296 [2024-12-05 13:27:48.771170] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.296 Malloc2 00:23:26.296 Malloc3 00:23:26.557 Malloc4 00:23:26.557 Malloc5 00:23:26.557 Malloc6 00:23:26.557 Malloc7 00:23:26.557 Malloc8 00:23:26.557 Malloc9 00:23:26.557 Malloc10 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=999593 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 999593 /var/tmp/bdevperf.sock 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 999593 ']' 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
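For reference, the ten `cat` invocations above (target/shutdown.sh line 29) each append one RPC batch per subsystem to rpcs.txt, and the single `rpc_cmd` at shutdown.sh line 36 replays the whole file against the target in one go. A minimal sketch of what each batch amounts to, written as standalone rpc.py calls — the bdev size, block size, and serial number here are illustrative assumptions, not values taken from this log:

    # hypothetical per-subsystem setup, i = 1..10
    i=1
    scripts/rpc.py bdev_malloc_create -b Malloc$i 128 512      # backing malloc bdev (size/block size assumed)
    scripts/rpc.py nvmf_create_subsystem -a -s SPDK$i nqn.2016-06.io.spdk:cnode$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 4420 nqn.2016-06.io.spdk:cnode$i

The Malloc1 through Malloc10 lines and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice below are the target-side result of replaying that batch.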
00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.818 { 00:23:26.818 "params": { 00:23:26.818 "name": "Nvme$subsystem", 00:23:26.818 "trtype": "$TEST_TRANSPORT", 00:23:26.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.818 "adrfam": "ipv4", 00:23:26.818 "trsvcid": "$NVMF_PORT", 00:23:26.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.818 "hdgst": ${hdgst:-false}, 00:23:26.818 "ddgst": ${ddgst:-false} 00:23:26.818 }, 00:23:26.818 "method": "bdev_nvme_attach_controller" 00:23:26.818 } 00:23:26.818 EOF 00:23:26.818 )") 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.818 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.818 { 00:23:26.818 "params": { 00:23:26.818 "name": "Nvme$subsystem", 00:23:26.818 "trtype": "$TEST_TRANSPORT", 00:23:26.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.818 "adrfam": "ipv4", 00:23:26.818 "trsvcid": "$NVMF_PORT", 00:23:26.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.818 "hdgst": ${hdgst:-false}, 00:23:26.818 "ddgst": ${ddgst:-false} 00:23:26.818 }, 00:23:26.819 "method": "bdev_nvme_attach_controller" 00:23:26.819 } 00:23:26.819 EOF 00:23:26.819 )") 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.819 { 00:23:26.819 "params": { 00:23:26.819 "name": "Nvme$subsystem", 00:23:26.819 "trtype": "$TEST_TRANSPORT", 00:23:26.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.819 "adrfam": "ipv4", 00:23:26.819 "trsvcid": "$NVMF_PORT", 00:23:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.819 "hdgst": ${hdgst:-false}, 00:23:26.819 "ddgst": ${ddgst:-false} 00:23:26.819 }, 00:23:26.819 "method": "bdev_nvme_attach_controller" 
00:23:26.819 } 00:23:26.819 EOF 00:23:26.819 )") 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.819 { 00:23:26.819 "params": { 00:23:26.819 "name": "Nvme$subsystem", 00:23:26.819 "trtype": "$TEST_TRANSPORT", 00:23:26.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.819 "adrfam": "ipv4", 00:23:26.819 "trsvcid": "$NVMF_PORT", 00:23:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.819 "hdgst": ${hdgst:-false}, 00:23:26.819 "ddgst": ${ddgst:-false} 00:23:26.819 }, 00:23:26.819 "method": "bdev_nvme_attach_controller" 00:23:26.819 } 00:23:26.819 EOF 00:23:26.819 )") 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.819 { 00:23:26.819 "params": { 00:23:26.819 "name": "Nvme$subsystem", 00:23:26.819 "trtype": "$TEST_TRANSPORT", 00:23:26.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.819 "adrfam": "ipv4", 00:23:26.819 "trsvcid": "$NVMF_PORT", 00:23:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.819 "hdgst": ${hdgst:-false}, 00:23:26.819 "ddgst": ${ddgst:-false} 00:23:26.819 }, 00:23:26.819 "method": "bdev_nvme_attach_controller" 00:23:26.819 } 00:23:26.819 EOF 00:23:26.819 )") 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.819 { 00:23:26.819 "params": { 00:23:26.819 "name": "Nvme$subsystem", 00:23:26.819 "trtype": "$TEST_TRANSPORT", 00:23:26.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.819 "adrfam": "ipv4", 00:23:26.819 "trsvcid": "$NVMF_PORT", 00:23:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.819 "hdgst": ${hdgst:-false}, 00:23:26.819 "ddgst": ${ddgst:-false} 00:23:26.819 }, 00:23:26.819 "method": "bdev_nvme_attach_controller" 00:23:26.819 } 00:23:26.819 EOF 00:23:26.819 )") 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.819 [2024-12-05 13:27:49.223884] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:23:26.819 [2024-12-05 13:27:49.223939] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.819 { 00:23:26.819 "params": { 00:23:26.819 "name": "Nvme$subsystem", 00:23:26.819 "trtype": "$TEST_TRANSPORT", 00:23:26.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.819 "adrfam": "ipv4", 00:23:26.819 "trsvcid": "$NVMF_PORT", 00:23:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.819 "hdgst": ${hdgst:-false}, 00:23:26.819 "ddgst": ${ddgst:-false} 00:23:26.819 }, 00:23:26.819 "method": "bdev_nvme_attach_controller" 00:23:26.819 } 00:23:26.819 EOF 00:23:26.819 )") 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.819 { 00:23:26.819 "params": { 00:23:26.819 "name": "Nvme$subsystem", 00:23:26.819 "trtype": "$TEST_TRANSPORT", 00:23:26.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.819 "adrfam": "ipv4", 00:23:26.819 "trsvcid": "$NVMF_PORT", 00:23:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.819 "hdgst": ${hdgst:-false}, 00:23:26.819 "ddgst": ${ddgst:-false} 00:23:26.819 }, 00:23:26.819 "method": "bdev_nvme_attach_controller" 00:23:26.819 } 00:23:26.819 EOF 00:23:26.819 )") 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.819 { 00:23:26.819 "params": { 00:23:26.819 "name": "Nvme$subsystem", 00:23:26.819 "trtype": "$TEST_TRANSPORT", 00:23:26.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.819 "adrfam": "ipv4", 00:23:26.819 "trsvcid": "$NVMF_PORT", 00:23:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.819 "hdgst": ${hdgst:-false}, 00:23:26.819 "ddgst": ${ddgst:-false} 00:23:26.819 }, 00:23:26.819 "method": "bdev_nvme_attach_controller" 00:23:26.819 } 00:23:26.819 EOF 00:23:26.819 )") 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:26.819 { 00:23:26.819 "params": { 00:23:26.819 "name": "Nvme$subsystem", 00:23:26.819 "trtype": "$TEST_TRANSPORT", 00:23:26.819 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:26.819 "adrfam": "ipv4", 
00:23:26.819 "trsvcid": "$NVMF_PORT", 00:23:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:26.819 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:26.819 "hdgst": ${hdgst:-false}, 00:23:26.819 "ddgst": ${ddgst:-false} 00:23:26.819 }, 00:23:26.819 "method": "bdev_nvme_attach_controller" 00:23:26.819 } 00:23:26.819 EOF 00:23:26.819 )") 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:26.819 13:27:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:26.819 "params": { 00:23:26.819 "name": "Nvme1", 00:23:26.819 "trtype": "tcp", 00:23:26.819 "traddr": "10.0.0.2", 00:23:26.819 "adrfam": "ipv4", 00:23:26.819 "trsvcid": "4420", 00:23:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:26.819 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:26.819 "hdgst": false, 00:23:26.819 "ddgst": false 00:23:26.819 }, 00:23:26.819 "method": "bdev_nvme_attach_controller" 00:23:26.819 },{ 00:23:26.819 "params": { 00:23:26.819 "name": "Nvme2", 00:23:26.819 "trtype": "tcp", 00:23:26.819 "traddr": "10.0.0.2", 00:23:26.819 "adrfam": "ipv4", 00:23:26.819 "trsvcid": "4420", 00:23:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:26.819 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:26.819 "hdgst": false, 00:23:26.819 "ddgst": false 00:23:26.819 }, 00:23:26.819 "method": "bdev_nvme_attach_controller" 00:23:26.819 },{ 00:23:26.819 "params": { 00:23:26.819 "name": "Nvme3", 00:23:26.819 "trtype": "tcp", 00:23:26.819 "traddr": "10.0.0.2", 00:23:26.819 "adrfam": "ipv4", 00:23:26.819 "trsvcid": "4420", 00:23:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:26.819 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:26.819 "hdgst": false, 00:23:26.819 "ddgst": false 00:23:26.819 }, 00:23:26.819 "method": "bdev_nvme_attach_controller" 00:23:26.819 },{ 00:23:26.819 "params": { 00:23:26.819 "name": "Nvme4", 00:23:26.819 "trtype": "tcp", 00:23:26.819 "traddr": "10.0.0.2", 00:23:26.819 "adrfam": "ipv4", 00:23:26.819 "trsvcid": "4420", 00:23:26.819 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:26.819 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:26.819 "hdgst": false, 00:23:26.819 "ddgst": false 00:23:26.819 }, 00:23:26.819 "method": "bdev_nvme_attach_controller" 00:23:26.820 },{ 00:23:26.820 "params": { 00:23:26.820 "name": "Nvme5", 00:23:26.820 "trtype": "tcp", 00:23:26.820 "traddr": "10.0.0.2", 00:23:26.820 "adrfam": "ipv4", 00:23:26.820 "trsvcid": "4420", 00:23:26.820 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:26.820 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:26.820 "hdgst": false, 00:23:26.820 "ddgst": false 00:23:26.820 }, 00:23:26.820 "method": "bdev_nvme_attach_controller" 00:23:26.820 },{ 00:23:26.820 "params": { 00:23:26.820 "name": "Nvme6", 00:23:26.820 "trtype": "tcp", 00:23:26.820 "traddr": "10.0.0.2", 00:23:26.820 "adrfam": "ipv4", 00:23:26.820 "trsvcid": "4420", 00:23:26.820 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:26.820 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:26.820 "hdgst": false, 00:23:26.820 "ddgst": false 00:23:26.820 }, 00:23:26.820 "method": "bdev_nvme_attach_controller" 00:23:26.820 },{ 00:23:26.820 "params": { 00:23:26.820 "name": "Nvme7", 00:23:26.820 "trtype": "tcp", 00:23:26.820 "traddr": "10.0.0.2", 00:23:26.820 
"adrfam": "ipv4", 00:23:26.820 "trsvcid": "4420", 00:23:26.820 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:26.820 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:26.820 "hdgst": false, 00:23:26.820 "ddgst": false 00:23:26.820 }, 00:23:26.820 "method": "bdev_nvme_attach_controller" 00:23:26.820 },{ 00:23:26.820 "params": { 00:23:26.820 "name": "Nvme8", 00:23:26.820 "trtype": "tcp", 00:23:26.820 "traddr": "10.0.0.2", 00:23:26.820 "adrfam": "ipv4", 00:23:26.820 "trsvcid": "4420", 00:23:26.820 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:26.820 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:26.820 "hdgst": false, 00:23:26.820 "ddgst": false 00:23:26.820 }, 00:23:26.820 "method": "bdev_nvme_attach_controller" 00:23:26.820 },{ 00:23:26.820 "params": { 00:23:26.820 "name": "Nvme9", 00:23:26.820 "trtype": "tcp", 00:23:26.820 "traddr": "10.0.0.2", 00:23:26.820 "adrfam": "ipv4", 00:23:26.820 "trsvcid": "4420", 00:23:26.820 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:26.820 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:26.820 "hdgst": false, 00:23:26.820 "ddgst": false 00:23:26.820 }, 00:23:26.820 "method": "bdev_nvme_attach_controller" 00:23:26.820 },{ 00:23:26.820 "params": { 00:23:26.820 "name": "Nvme10", 00:23:26.820 "trtype": "tcp", 00:23:26.820 "traddr": "10.0.0.2", 00:23:26.820 "adrfam": "ipv4", 00:23:26.820 "trsvcid": "4420", 00:23:26.820 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:26.820 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:26.820 "hdgst": false, 00:23:26.820 "ddgst": false 00:23:26.820 }, 00:23:26.820 "method": "bdev_nvme_attach_controller" 00:23:26.820 }' 00:23:26.820 [2024-12-05 13:27:49.302836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.820 [2024-12-05 13:27:49.339265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.732 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:28.732 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:28.732 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:28.733 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.733 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:28.733 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.733 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 999593 00:23:28.733 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:28.733 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 999593 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:28.733 13:27:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:29.304 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 999208 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.305 { 00:23:29.305 "params": { 00:23:29.305 "name": "Nvme$subsystem", 00:23:29.305 "trtype": "$TEST_TRANSPORT", 00:23:29.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.305 "adrfam": "ipv4", 00:23:29.305 "trsvcid": "$NVMF_PORT", 00:23:29.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.305 "hdgst": ${hdgst:-false}, 00:23:29.305 "ddgst": ${ddgst:-false} 00:23:29.305 }, 00:23:29.305 "method": "bdev_nvme_attach_controller" 00:23:29.305 } 00:23:29.305 EOF 00:23:29.305 )") 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.305 { 00:23:29.305 "params": { 00:23:29.305 "name": "Nvme$subsystem", 00:23:29.305 "trtype": "$TEST_TRANSPORT", 00:23:29.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.305 "adrfam": "ipv4", 00:23:29.305 "trsvcid": "$NVMF_PORT", 00:23:29.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.305 "hdgst": ${hdgst:-false}, 00:23:29.305 "ddgst": ${ddgst:-false} 00:23:29.305 }, 00:23:29.305 "method": "bdev_nvme_attach_controller" 00:23:29.305 } 00:23:29.305 EOF 00:23:29.305 )") 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.305 { 00:23:29.305 "params": { 00:23:29.305 "name": "Nvme$subsystem", 00:23:29.305 "trtype": "$TEST_TRANSPORT", 00:23:29.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.305 "adrfam": "ipv4", 00:23:29.305 "trsvcid": "$NVMF_PORT", 00:23:29.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.305 "hdgst": ${hdgst:-false}, 00:23:29.305 "ddgst": ${ddgst:-false} 00:23:29.305 }, 00:23:29.305 "method": "bdev_nvme_attach_controller" 00:23:29.305 } 00:23:29.305 EOF 00:23:29.305 )") 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.305 { 00:23:29.305 "params": { 00:23:29.305 "name": "Nvme$subsystem", 00:23:29.305 "trtype": "$TEST_TRANSPORT", 00:23:29.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.305 "adrfam": "ipv4", 00:23:29.305 "trsvcid": "$NVMF_PORT", 00:23:29.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.305 "hdgst": ${hdgst:-false}, 00:23:29.305 "ddgst": ${ddgst:-false} 00:23:29.305 }, 00:23:29.305 "method": "bdev_nvme_attach_controller" 00:23:29.305 } 00:23:29.305 EOF 00:23:29.305 )") 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.305 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.305 { 00:23:29.305 "params": { 00:23:29.305 "name": "Nvme$subsystem", 00:23:29.305 "trtype": "$TEST_TRANSPORT", 00:23:29.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.305 "adrfam": "ipv4", 00:23:29.305 "trsvcid": "$NVMF_PORT", 00:23:29.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.305 "hdgst": ${hdgst:-false}, 00:23:29.305 "ddgst": ${ddgst:-false} 00:23:29.305 }, 00:23:29.305 "method": "bdev_nvme_attach_controller" 00:23:29.305 } 00:23:29.305 EOF 00:23:29.305 )") 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.566 { 00:23:29.566 "params": { 00:23:29.566 "name": "Nvme$subsystem", 00:23:29.566 "trtype": "$TEST_TRANSPORT", 00:23:29.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.566 "adrfam": "ipv4", 00:23:29.566 "trsvcid": "$NVMF_PORT", 00:23:29.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.566 "hdgst": ${hdgst:-false}, 00:23:29.566 "ddgst": ${ddgst:-false} 00:23:29.566 }, 00:23:29.566 "method": "bdev_nvme_attach_controller" 00:23:29.566 } 00:23:29.566 EOF 00:23:29.566 )") 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.566 { 00:23:29.566 "params": { 00:23:29.566 "name": "Nvme$subsystem", 00:23:29.566 "trtype": "$TEST_TRANSPORT", 00:23:29.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.566 "adrfam": "ipv4", 00:23:29.566 "trsvcid": "$NVMF_PORT", 00:23:29.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.566 "hdgst": ${hdgst:-false}, 00:23:29.566 "ddgst": ${ddgst:-false} 00:23:29.566 }, 00:23:29.566 "method": "bdev_nvme_attach_controller" 00:23:29.566 } 00:23:29.566 EOF 00:23:29.566 )") 00:23:29.566 [2024-12-05 13:27:51.886879] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:23:29.566 [2024-12-05 13:27:51.886933] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000036 ] 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.566 { 00:23:29.566 "params": { 00:23:29.566 "name": "Nvme$subsystem", 00:23:29.566 "trtype": "$TEST_TRANSPORT", 00:23:29.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.566 "adrfam": "ipv4", 00:23:29.566 "trsvcid": "$NVMF_PORT", 00:23:29.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.566 "hdgst": ${hdgst:-false}, 00:23:29.566 "ddgst": ${ddgst:-false} 00:23:29.566 }, 00:23:29.566 "method": "bdev_nvme_attach_controller" 00:23:29.566 } 00:23:29.566 EOF 00:23:29.566 )") 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.566 { 00:23:29.566 "params": { 00:23:29.566 "name": "Nvme$subsystem", 00:23:29.566 "trtype": "$TEST_TRANSPORT", 00:23:29.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.566 "adrfam": "ipv4", 00:23:29.566 "trsvcid": "$NVMF_PORT", 00:23:29.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.566 "hdgst": ${hdgst:-false}, 00:23:29.566 "ddgst": ${ddgst:-false} 00:23:29.566 }, 00:23:29.566 "method": "bdev_nvme_attach_controller" 00:23:29.566 } 00:23:29.566 EOF 00:23:29.566 )") 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:29.566 { 00:23:29.566 "params": { 00:23:29.566 "name": "Nvme$subsystem", 00:23:29.566 "trtype": "$TEST_TRANSPORT", 00:23:29.566 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.566 "adrfam": "ipv4", 00:23:29.566 "trsvcid": "$NVMF_PORT", 00:23:29.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.566 "hdgst": ${hdgst:-false}, 00:23:29.566 "ddgst": ${ddgst:-false} 00:23:29.566 }, 00:23:29.566 "method": "bdev_nvme_attach_controller" 00:23:29.566 } 00:23:29.566 EOF 00:23:29.566 )") 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:29.566 13:27:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:29.566 "params": { 00:23:29.566 "name": "Nvme1", 00:23:29.566 "trtype": "tcp", 00:23:29.566 "traddr": "10.0.0.2", 00:23:29.566 "adrfam": "ipv4", 00:23:29.566 "trsvcid": "4420", 00:23:29.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.567 "hdgst": false, 00:23:29.567 "ddgst": false 00:23:29.567 }, 00:23:29.567 "method": "bdev_nvme_attach_controller" 00:23:29.567 },{ 00:23:29.567 "params": { 00:23:29.567 "name": "Nvme2", 00:23:29.567 "trtype": "tcp", 00:23:29.567 "traddr": "10.0.0.2", 00:23:29.567 "adrfam": "ipv4", 00:23:29.567 "trsvcid": "4420", 00:23:29.567 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:29.567 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:29.567 "hdgst": false, 00:23:29.567 "ddgst": false 00:23:29.567 }, 00:23:29.567 "method": "bdev_nvme_attach_controller" 00:23:29.567 },{ 00:23:29.567 "params": { 00:23:29.567 "name": "Nvme3", 00:23:29.567 "trtype": "tcp", 00:23:29.567 "traddr": "10.0.0.2", 00:23:29.567 "adrfam": "ipv4", 00:23:29.567 "trsvcid": "4420", 00:23:29.567 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:29.567 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:29.567 "hdgst": false, 00:23:29.567 "ddgst": false 00:23:29.567 }, 00:23:29.567 "method": "bdev_nvme_attach_controller" 00:23:29.567 },{ 00:23:29.567 "params": { 00:23:29.567 "name": "Nvme4", 00:23:29.567 "trtype": "tcp", 00:23:29.567 "traddr": "10.0.0.2", 00:23:29.567 "adrfam": "ipv4", 00:23:29.567 "trsvcid": "4420", 00:23:29.567 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:29.567 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:29.567 "hdgst": false, 00:23:29.567 "ddgst": false 00:23:29.567 }, 00:23:29.567 "method": "bdev_nvme_attach_controller" 00:23:29.567 },{ 00:23:29.567 "params": { 00:23:29.567 "name": "Nvme5", 00:23:29.567 "trtype": "tcp", 00:23:29.567 "traddr": "10.0.0.2", 00:23:29.567 "adrfam": "ipv4", 00:23:29.567 "trsvcid": "4420", 00:23:29.567 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:29.567 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:29.567 "hdgst": false, 00:23:29.567 "ddgst": false 00:23:29.567 }, 00:23:29.567 "method": "bdev_nvme_attach_controller" 00:23:29.567 },{ 00:23:29.567 "params": { 00:23:29.567 "name": "Nvme6", 00:23:29.567 "trtype": "tcp", 00:23:29.567 "traddr": "10.0.0.2", 00:23:29.567 "adrfam": "ipv4", 00:23:29.567 "trsvcid": "4420", 00:23:29.567 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:29.567 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:29.567 "hdgst": false, 00:23:29.567 "ddgst": false 00:23:29.567 }, 00:23:29.567 "method": "bdev_nvme_attach_controller" 00:23:29.567 },{ 00:23:29.567 "params": { 00:23:29.567 "name": "Nvme7", 00:23:29.567 "trtype": "tcp", 00:23:29.567 "traddr": "10.0.0.2", 00:23:29.567 "adrfam": "ipv4", 00:23:29.567 "trsvcid": "4420", 00:23:29.567 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:29.567 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:29.567 "hdgst": false, 00:23:29.567 "ddgst": false 00:23:29.567 }, 00:23:29.567 "method": "bdev_nvme_attach_controller" 00:23:29.567 },{ 00:23:29.567 "params": { 00:23:29.567 "name": "Nvme8", 00:23:29.567 "trtype": "tcp", 00:23:29.567 "traddr": "10.0.0.2", 00:23:29.567 "adrfam": "ipv4", 00:23:29.567 "trsvcid": "4420", 00:23:29.567 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:29.567 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:29.567 "hdgst": false, 00:23:29.567 "ddgst": false 00:23:29.567 }, 00:23:29.567 "method": "bdev_nvme_attach_controller" 00:23:29.567 },{ 00:23:29.567 "params": { 00:23:29.567 "name": "Nvme9", 00:23:29.567 "trtype": "tcp", 00:23:29.567 "traddr": "10.0.0.2", 00:23:29.567 "adrfam": "ipv4", 00:23:29.567 "trsvcid": "4420", 00:23:29.567 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:29.567 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:29.567 "hdgst": false, 00:23:29.567 "ddgst": false 00:23:29.567 }, 00:23:29.567 "method": "bdev_nvme_attach_controller" 00:23:29.567 },{ 00:23:29.567 "params": { 00:23:29.567 "name": "Nvme10", 00:23:29.567 "trtype": "tcp", 00:23:29.567 "traddr": "10.0.0.2", 00:23:29.567 "adrfam": "ipv4", 00:23:29.567 "trsvcid": "4420", 00:23:29.567 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:29.567 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:29.567 "hdgst": false, 00:23:29.567 "ddgst": false 00:23:29.567 }, 00:23:29.567 "method": "bdev_nvme_attach_controller" 00:23:29.567 }' 00:23:29.567 [2024-12-05 13:27:51.964965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.567 [2024-12-05 13:27:52.001073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.949 Running I/O for 1 seconds... 00:23:32.154 1858.00 IOPS, 116.12 MiB/s 00:23:32.154 Latency(us) 00:23:32.154 [2024-12-05T12:27:54.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.154 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.154 Verification LBA range: start 0x0 length 0x400 00:23:32.154 Nvme1n1 : 1.15 222.11 13.88 0.00 0.00 284669.87 16274.77 246415.36 00:23:32.154 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.154 Verification LBA range: start 0x0 length 0x400 00:23:32.154 Nvme2n1 : 1.06 185.57 11.60 0.00 0.00 325796.12 5898.24 283115.52 00:23:32.154 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.154 Verification LBA range: start 0x0 length 0x400 00:23:32.154 Nvme3n1 : 1.16 274.89 17.18 0.00 0.00 222712.75 10431.15 255153.49 00:23:32.154 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.154 Verification LBA range: start 0x0 length 0x400 00:23:32.154 Nvme4n1 : 1.09 239.48 14.97 0.00 0.00 243816.43 7809.71 218453.33 00:23:32.154 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.154 Verification LBA range: start 0x0 length 0x400 00:23:32.154 Nvme5n1 : 1.10 233.15 14.57 0.00 0.00 252135.47 16820.91 279620.27 00:23:32.154 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.154 Verification LBA range: start 0x0 length 0x400 00:23:32.154 Nvme6n1 : 1.12 227.56 14.22 0.00 0.00 254032.64 17803.95 248162.99 00:23:32.154 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.154 Verification LBA range: start 0x0 length 0x400 00:23:32.154 Nvme7n1 : 1.17 273.26 17.08 0.00 0.00 208575.83 19660.80 244667.73 00:23:32.154 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.154 Verification LBA range: start 0x0 length 0x400 00:23:32.155 Nvme8n1 : 1.18 270.67 16.92 0.00 0.00 206894.76 12069.55 246415.36 00:23:32.155 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:32.155 Verification LBA range: start 0x0 length 0x400 00:23:32.155 Nvme9n1 : 1.17 224.67 14.04 0.00 0.00 243144.55 2170.88 255153.49 00:23:32.155 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:23:32.155 Verification LBA range: start 0x0 length 0x400 00:23:32.155 Nvme10n1 : 1.18 277.99 17.37 0.00 0.00 193322.65 1181.01 244667.73 00:23:32.155 [2024-12-05T12:27:54.723Z] =================================================================================================================== 00:23:32.155 [2024-12-05T12:27:54.723Z] Total : 2429.34 151.83 0.00 0.00 238332.24 1181.01 283115.52 00:23:32.155 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:32.155 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:32.155 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:32.155 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:32.155 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:32.155 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:32.155 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:32.155 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:32.155 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:32.155 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:32.155 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:32.155 rmmod nvme_tcp 00:23:32.416 rmmod nvme_fabrics 00:23:32.416 rmmod nvme_keyring 00:23:32.417 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:32.417 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:32.417 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:32.417 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 999208 ']' 00:23:32.417 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 999208 00:23:32.417 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 999208 ']' 00:23:32.417 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 999208 00:23:32.417 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:32.417 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.417 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 999208 00:23:32.417 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:32.417 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:32.417 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 999208' 00:23:32.417 killing process with pid 999208 00:23:32.417 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 999208 00:23:32.417 13:27:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 999208 00:23:32.678 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:32.678 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:32.678 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:32.678 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:32.678 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:32.678 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:32.678 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:32.678 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:32.678 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:32.678 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.678 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.678 13:27:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.591 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:34.591 00:23:34.591 real 0m17.300s 00:23:34.591 user 0m34.555s 00:23:34.591 sys 0m7.228s 00:23:34.591 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:34.591 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:34.591 ************************************ 00:23:34.591 END TEST nvmf_shutdown_tc1 00:23:34.591 ************************************ 00:23:34.852 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:34.852 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:34.852 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:34.852 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:34.852 ************************************ 00:23:34.852 START TEST nvmf_shutdown_tc2 00:23:34.852 ************************************ 00:23:34.852 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:34.852 13:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:34.852 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:34.852 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:34.852 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.852 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:34.852 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:34.852 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:34.852 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:34.853 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:34.853 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:34.853 Found net devices under 0000:31:00.0: cvl_0_0 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.853 13:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:34.853 Found net devices under 0000:31:00.1: cvl_0_1 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:34.853 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.854 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.854 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.854 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.114 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:35.114 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.114 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.114 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.114 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:35.114 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:35.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.731 ms 00:23:35.114 00:23:35.114 --- 10.0.0.2 ping statistics --- 00:23:35.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.114 rtt min/avg/max/mdev = 0.731/0.731/0.731/0.000 ms 00:23:35.114 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:35.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:23:35.114 00:23:35.114 --- 10.0.0.1 ping statistics --- 00:23:35.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.114 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:23:35.114 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.114 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:35.114 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:35.114 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.114 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:35.114 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:35.115 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.115 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:35.115 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:35.115 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:35.115 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.115 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.115 13:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.115 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1001371 00:23:35.115 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1001371 00:23:35.115 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:35.115 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1001371 ']' 00:23:35.115 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.115 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.115 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.115 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.115 13:27:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.377 [2024-12-05 13:27:57.697290] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:23:35.377 [2024-12-05 13:27:57.697358] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.377 [2024-12-05 13:27:57.797355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:35.377 [2024-12-05 13:27:57.828127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.377 [2024-12-05 13:27:57.828156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.377 [2024-12-05 13:27:57.828162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.377 [2024-12-05 13:27:57.828167] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.377 [2024-12-05 13:27:57.828171] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
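[Editorial note] The nvmf_tcp_init sequence traced above reduces to a short namespace recipe: the first E810 port becomes the target interface inside a private network namespace, the second stays in the root namespace as the initiator, and an iptables rule plus two pings prove the 10.0.0.0/24 link before the target starts. A minimal sketch assembled from the @267-@291 trace lines (assuming ports named cvl_0_0/cvl_0_1 as in this run; the helper's veth-based virt path is not shown):

    # Target port goes into its own namespace; initiator port stays in the root ns.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends of the back-to-back link.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring everything up, including loopback inside the namespace.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port; the SPDK_NVMF comment tag is what lets the
    # teardown strip this rule later via iptables-save | grep -v SPDK_NVMF.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Verify reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself is then launched inside the namespace via the NVMF_TARGET_NS_CMD prefix set at nvmf/common.sh@266, which is why the nvmf_tgt command line at @508 above is wrapped in ip netns exec cvl_0_0_ns_spdk.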
00:23:35.377 [2024-12-05 13:27:57.829639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.377 [2024-12-05 13:27:57.829794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.377 [2024-12-05 13:27:57.829927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:35.377 [2024-12-05 13:27:57.830094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.948 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.948 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:35.948 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.948 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.948 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.210 [2024-12-05 13:27:58.547753] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.210 13:27:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.210 Malloc1 00:23:36.210 [2024-12-05 13:27:58.659484] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.210 Malloc2 00:23:36.210 Malloc3 00:23:36.210 Malloc4 00:23:36.472 Malloc5 00:23:36.472 Malloc6 00:23:36.472 Malloc7 00:23:36.472 Malloc8 00:23:36.472 Malloc9 00:23:36.472 Malloc10 00:23:36.473 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.473 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:36.473 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:36.473 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.736 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1001592 00:23:36.736 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1001592 /var/tmp/bdevperf.sock 00:23:36.736 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1001592 ']' 00:23:36.736 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.736 13:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.736 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.736 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:36.736 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.736 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:36.736 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.736 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:36.736 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:36.736 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.736 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.736 { 00:23:36.736 "params": { 00:23:36.736 "name": "Nvme$subsystem", 00:23:36.736 "trtype": "$TEST_TRANSPORT", 00:23:36.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.736 "adrfam": "ipv4", 00:23:36.736 "trsvcid": "$NVMF_PORT", 00:23:36.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.736 "hdgst": ${hdgst:-false}, 00:23:36.736 "ddgst": ${ddgst:-false} 00:23:36.736 }, 00:23:36.736 "method": "bdev_nvme_attach_controller" 00:23:36.736 } 00:23:36.736 EOF 00:23:36.736 )") 00:23:36.736 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.737 { 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme$subsystem", 00:23:36.737 "trtype": "$TEST_TRANSPORT", 00:23:36.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "$NVMF_PORT", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.737 "hdgst": ${hdgst:-false}, 00:23:36.737 "ddgst": ${ddgst:-false} 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 } 00:23:36.737 EOF 00:23:36.737 )") 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.737 { 00:23:36.737 "params": { 00:23:36.737 
"name": "Nvme$subsystem", 00:23:36.737 "trtype": "$TEST_TRANSPORT", 00:23:36.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "$NVMF_PORT", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.737 "hdgst": ${hdgst:-false}, 00:23:36.737 "ddgst": ${ddgst:-false} 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 } 00:23:36.737 EOF 00:23:36.737 )") 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.737 { 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme$subsystem", 00:23:36.737 "trtype": "$TEST_TRANSPORT", 00:23:36.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "$NVMF_PORT", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.737 "hdgst": ${hdgst:-false}, 00:23:36.737 "ddgst": ${ddgst:-false} 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 } 00:23:36.737 EOF 00:23:36.737 )") 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.737 { 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme$subsystem", 00:23:36.737 "trtype": "$TEST_TRANSPORT", 00:23:36.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "$NVMF_PORT", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.737 "hdgst": ${hdgst:-false}, 00:23:36.737 "ddgst": ${ddgst:-false} 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 } 00:23:36.737 EOF 00:23:36.737 )") 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.737 { 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme$subsystem", 00:23:36.737 "trtype": "$TEST_TRANSPORT", 00:23:36.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "$NVMF_PORT", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.737 "hdgst": ${hdgst:-false}, 00:23:36.737 "ddgst": ${ddgst:-false} 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 } 00:23:36.737 EOF 00:23:36.737 )") 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.737 { 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme$subsystem", 00:23:36.737 "trtype": "$TEST_TRANSPORT", 00:23:36.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "$NVMF_PORT", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.737 "hdgst": ${hdgst:-false}, 00:23:36.737 "ddgst": ${ddgst:-false} 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 } 00:23:36.737 EOF 00:23:36.737 )") 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.737 [2024-12-05 13:27:59.109585] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:23:36.737 [2024-12-05 13:27:59.109662] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1001592 ] 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.737 { 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme$subsystem", 00:23:36.737 "trtype": "$TEST_TRANSPORT", 00:23:36.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "$NVMF_PORT", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.737 "hdgst": ${hdgst:-false}, 00:23:36.737 "ddgst": ${ddgst:-false} 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 } 00:23:36.737 EOF 00:23:36.737 )") 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.737 { 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme$subsystem", 00:23:36.737 "trtype": "$TEST_TRANSPORT", 00:23:36.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "$NVMF_PORT", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.737 "hdgst": ${hdgst:-false}, 00:23:36.737 "ddgst": ${ddgst:-false} 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 } 00:23:36.737 EOF 00:23:36.737 )") 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:36.737 { 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme$subsystem", 00:23:36.737 "trtype": "$TEST_TRANSPORT", 00:23:36.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.737 
"adrfam": "ipv4", 00:23:36.737 "trsvcid": "$NVMF_PORT", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.737 "hdgst": ${hdgst:-false}, 00:23:36.737 "ddgst": ${ddgst:-false} 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 } 00:23:36.737 EOF 00:23:36.737 )") 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:36.737 13:27:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme1", 00:23:36.737 "trtype": "tcp", 00:23:36.737 "traddr": "10.0.0.2", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "4420", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.737 "hdgst": false, 00:23:36.737 "ddgst": false 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 },{ 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme2", 00:23:36.737 "trtype": "tcp", 00:23:36.737 "traddr": "10.0.0.2", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "4420", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:36.737 "hdgst": false, 00:23:36.737 "ddgst": false 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 },{ 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme3", 00:23:36.737 "trtype": "tcp", 00:23:36.737 "traddr": "10.0.0.2", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "4420", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:36.737 "hdgst": false, 00:23:36.737 "ddgst": false 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 },{ 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme4", 00:23:36.737 "trtype": "tcp", 00:23:36.737 "traddr": "10.0.0.2", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "4420", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:36.737 "hdgst": false, 00:23:36.737 "ddgst": false 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 },{ 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme5", 00:23:36.737 "trtype": "tcp", 00:23:36.737 "traddr": "10.0.0.2", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "4420", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:36.737 "hdgst": false, 00:23:36.737 "ddgst": false 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 },{ 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme6", 00:23:36.737 "trtype": "tcp", 00:23:36.737 "traddr": "10.0.0.2", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "4420", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:36.737 "hdgst": false, 00:23:36.737 "ddgst": false 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 },{ 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme7", 00:23:36.737 "trtype": "tcp", 00:23:36.737 "traddr": "10.0.0.2", 
00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "4420", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:36.737 "hdgst": false, 00:23:36.737 "ddgst": false 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 },{ 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme8", 00:23:36.737 "trtype": "tcp", 00:23:36.737 "traddr": "10.0.0.2", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "4420", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:36.737 "hdgst": false, 00:23:36.737 "ddgst": false 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 },{ 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme9", 00:23:36.737 "trtype": "tcp", 00:23:36.737 "traddr": "10.0.0.2", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "4420", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:36.737 "hdgst": false, 00:23:36.737 "ddgst": false 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 },{ 00:23:36.737 "params": { 00:23:36.737 "name": "Nvme10", 00:23:36.737 "trtype": "tcp", 00:23:36.737 "traddr": "10.0.0.2", 00:23:36.737 "adrfam": "ipv4", 00:23:36.737 "trsvcid": "4420", 00:23:36.737 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:36.737 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:36.737 "hdgst": false, 00:23:36.737 "ddgst": false 00:23:36.737 }, 00:23:36.737 "method": "bdev_nvme_attach_controller" 00:23:36.737 }' 00:23:36.738 [2024-12-05 13:27:59.192504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.738 [2024-12-05 13:27:59.228761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.122 Running I/O for 10 seconds... 
00:23:38.122 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.122 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:38.122 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:38.122 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.122 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:38.399 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.399 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:38.399 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:38.399 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:38.399 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:38.399 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:38.399 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:38.399 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:38.399 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:38.399 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:38.399 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.399 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:38.399 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.399 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:38.399 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:38.399 13:28:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:38.660 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:38.660 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:38.660 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:38.660 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:38.660 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.660 13:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:38.660 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.660 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:38.660 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:38.660 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:38.920 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:38.920 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:38.920 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:38.920 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:38.920 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.920 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.181 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.181 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:39.181 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:39.181 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:39.181 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:39.181 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:39.181 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1001592 00:23:39.181 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1001592 ']' 00:23:39.181 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1001592 00:23:39.181 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:39.181 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.181 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1001592 00:23:39.181 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.181 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.181 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1001592' 00:23:39.181 killing process with pid 1001592 00:23:39.181 13:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1001592 00:23:39.181 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1001592
00:23:39.181 Received shutdown signal, test time was about 0.988221 seconds
00:23:39.181 
00:23:39.181 Latency(us)
00:23:39.181 [2024-12-05T12:28:01.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:39.181 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.181 Verification LBA range: start 0x0 length 0x400
00:23:39.181 Nvme1n1 : 0.97 262.67 16.42 0.00 0.00 240765.87 19770.03 267386.88
00:23:39.181 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.181 Verification LBA range: start 0x0 length 0x400
00:23:39.181 Nvme2n1 : 0.99 259.28 16.21 0.00 0.00 239113.81 16930.13 248162.99
00:23:39.181 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.181 Verification LBA range: start 0x0 length 0x400
00:23:39.181 Nvme3n1 : 0.98 261.81 16.36 0.00 0.00 232099.41 22391.47 228939.09
00:23:39.181 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.181 Verification LBA range: start 0x0 length 0x400
00:23:39.181 Nvme4n1 : 0.95 207.97 13.00 0.00 0.00 284057.01 3167.57 253405.87
00:23:39.181 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.181 Verification LBA range: start 0x0 length 0x400
00:23:39.181 Nvme5n1 : 0.95 201.23 12.58 0.00 0.00 288988.44 18677.76 255153.49
00:23:39.181 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.181 Verification LBA range: start 0x0 length 0x400
00:23:39.181 Nvme6n1 : 0.97 198.32 12.40 0.00 0.00 287231.15 16820.91 255153.49
00:23:39.181 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.181 Verification LBA range: start 0x0 length 0x400
00:23:39.181 Nvme7n1 : 0.98 260.25 16.27 0.00 0.00 214263.04 17148.59 258648.75
00:23:39.181 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.181 Verification LBA range: start 0x0 length 0x400
00:23:39.181 Nvme8n1 : 0.97 267.41 16.71 0.00 0.00 203292.51 2143.57 241172.48
00:23:39.181 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.181 Verification LBA range: start 0x0 length 0x400
00:23:39.181 Nvme9n1 : 0.96 265.33 16.58 0.00 0.00 200369.92 18896.21 249910.61
00:23:39.181 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.181 Verification LBA range: start 0x0 length 0x400
00:23:39.181 Nvme10n1 : 0.98 195.92 12.25 0.00 0.00 265906.06 20316.16 272629.76
00:23:39.181 [2024-12-05T12:28:01.749Z] ===================================================================================================================
00:23:39.181 [2024-12-05T12:28:01.749Z] Total : 2380.19 148.76 0.00 0.00 241659.53 2143.57 272629.76
00:23:39.441 13:28:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:40.417 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1001371 00:23:40.417 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:23:40.417 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
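[Editorial note] The read_io_count progression sampled earlier (3, then 67, then 131) is the waitforio gate from target/shutdown.sh: bdevperf is killed only once Nvme1n1 has completed at least 100 reads, so the target is torn down while I/O is genuinely in flight. A reconstruction from the @51-@70 trace lines (loop details inferred from the trace, not copied from the script):

    waitforio() {
        local rpc_sock=$1 bdev=$2            # e.g. /var/tmp/bdevperf.sock Nvme1n1
        [ -n "$rpc_sock" ] || return 1       # @51
        [ -n "$bdev" ] || return 1           # @55
        local ret=1 i read_io_count
        for ((i = 10; i != 0; i--)); do      # @60: up to 10 samples
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                | jq -r '.bdevs[0].num_read_ops')   # @61
            if [ "$read_io_count" -ge 100 ]; then   # @64
                ret=0                               # @65
                break                               # @66
            fi
            sleep 0.25                              # @68
        done
        return $ret                                 # @70
    }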
00:23:40.417 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:40.418 rmmod nvme_tcp
00:23:40.418 rmmod nvme_fabrics
00:23:40.418 rmmod nvme_keyring
00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1001371 ']' 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1001371 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1001371 ']' 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1001371 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1001371 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1001371'
00:23:40.418 killing process with pid 1001371
00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1001371 00:23:40.418 13:28:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1001371 00:23:40.677 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
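[Editorial note] Both teardown paths above funnel through the same killprocess helper (pid 1001592 for bdevperf, pid 1001371 for the target). A minimal reconstruction of the guarded kill from the @954-@978 trace lines; the sudo-wrapper branch is not exercised in this run, so its body is omitted here:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1            # @954: reject an empty pid
        kill -0 "$pid" || return 1           # @958: is the process still alive?
        local process_name=''
        if [ "$(uname)" = Linux ]; then      # @959
            process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_N here
        fi
        if [ "$process_name" != sudo ]; then # @964: a sudo wrapper would need its
                                             # child killed instead (branch omitted)
            echo "killing process with pid $pid"   # @972
            kill "$pid"                            # @973
            wait "$pid"                            # @978: reap and collect status
        fi
    }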
00:23:40.677 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:40.677 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:40.677 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:40.677 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:40.677 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:40.677 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:40.677 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:40.677 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:40.677 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.677 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.677 13:28:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:43.221 
00:23:43.221 real 0m8.038s
00:23:43.221 user 0m24.260s
00:23:43.221 sys 0m1.376s
00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:23:43.221 ************************************
00:23:43.221 END TEST nvmf_shutdown_tc2
00:23:43.221 ************************************
00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:43.221 ************************************
00:23:43.221 START TEST nvmf_shutdown_tc3
00:23:43.221 ************************************
00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.221 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:43.222 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:43.222 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:43.222 13:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:43.222 Found net devices under 0000:31:00.0: cvl_0_0 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:43.222 Found net devices under 0000:31:00.1: cvl_0_1 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.222 13:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:43.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms 00:23:43.222 00:23:43.222 --- 10.0.0.2 ping statistics --- 00:23:43.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.222 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:43.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:43.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:23:43.222 00:23:43.222 --- 10.0.0.1 ping statistics --- 00:23:43.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.222 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1002930 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1002930 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:43.222 13:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1002930 ']' 00:23:43.222 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.223 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:43.223 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.223 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:43.223 13:28:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.223 [2024-12-05 13:28:05.786741] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:23:43.482 [2024-12-05 13:28:05.786811] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.482 [2024-12-05 13:28:05.891587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:43.482 [2024-12-05 13:28:05.927232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.482 [2024-12-05 13:28:05.927266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.482 [2024-12-05 13:28:05.927272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.482 [2024-12-05 13:28:05.927277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.482 [2024-12-05 13:28:05.927281] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
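The trace above builds the split-namespace TCP topology these tests rely on: the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, the initiator-side port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, an iptables rule opens the NVMe/TCP port 4420, and a one-packet ping in each direction confirms the path before nvmf_tgt is launched inside the namespace. A minimal sketch of that same sequence, assuming the cvl_0_0/cvl_0_1 interface names seen in this run:

# Put the target NIC in its own network namespace; leave the initiator NIC in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Accept NVMe/TCP traffic on the default port.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Verify reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
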
00:23:43.482 [2024-12-05 13:28:05.928860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:43.482 [2024-12-05 13:28:05.929034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:43.482 [2024-12-05 13:28:05.929199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:43.482 [2024-12-05 13:28:05.929200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:44.052 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.052 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:44.052 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:44.052 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:44.052 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.346 [2024-12-05 13:28:06.637324] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.346 13:28:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.346 Malloc1 00:23:44.346 [2024-12-05 13:28:06.744805] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.346 Malloc2 00:23:44.346 Malloc3 00:23:44.346 Malloc4 00:23:44.346 Malloc5 00:23:44.634 Malloc6 00:23:44.634 Malloc7 00:23:44.634 Malloc8 00:23:44.634 Malloc9 00:23:44.634 Malloc10 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1003311 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1003311 /var/tmp/bdevperf.sock 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1003311 ']' 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.634 13:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.634 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.634 { 00:23:44.634 "params": { 00:23:44.634 "name": "Nvme$subsystem", 00:23:44.634 "trtype": "$TEST_TRANSPORT", 00:23:44.634 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.634 "adrfam": "ipv4", 00:23:44.635 "trsvcid": "$NVMF_PORT", 00:23:44.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.635 "hdgst": ${hdgst:-false}, 00:23:44.635 "ddgst": ${ddgst:-false} 00:23:44.635 }, 00:23:44.635 "method": "bdev_nvme_attach_controller" 00:23:44.635 } 00:23:44.635 EOF 00:23:44.635 )") 00:23:44.635 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.635 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.635 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.635 { 00:23:44.635 "params": { 00:23:44.635 "name": "Nvme$subsystem", 00:23:44.635 "trtype": "$TEST_TRANSPORT", 00:23:44.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.635 "adrfam": "ipv4", 00:23:44.635 "trsvcid": "$NVMF_PORT", 00:23:44.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.635 "hdgst": ${hdgst:-false}, 00:23:44.635 "ddgst": ${ddgst:-false} 00:23:44.635 }, 00:23:44.635 "method": "bdev_nvme_attach_controller" 00:23:44.635 } 00:23:44.635 EOF 00:23:44.635 )") 00:23:44.635 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.635 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.635 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.635 { 00:23:44.635 "params": { 00:23:44.635 
"name": "Nvme$subsystem", 00:23:44.635 "trtype": "$TEST_TRANSPORT", 00:23:44.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.635 "adrfam": "ipv4", 00:23:44.635 "trsvcid": "$NVMF_PORT", 00:23:44.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.635 "hdgst": ${hdgst:-false}, 00:23:44.635 "ddgst": ${ddgst:-false} 00:23:44.635 }, 00:23:44.635 "method": "bdev_nvme_attach_controller" 00:23:44.635 } 00:23:44.635 EOF 00:23:44.635 )") 00:23:44.635 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.635 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.635 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.635 { 00:23:44.635 "params": { 00:23:44.635 "name": "Nvme$subsystem", 00:23:44.635 "trtype": "$TEST_TRANSPORT", 00:23:44.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.635 "adrfam": "ipv4", 00:23:44.635 "trsvcid": "$NVMF_PORT", 00:23:44.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.635 "hdgst": ${hdgst:-false}, 00:23:44.635 "ddgst": ${ddgst:-false} 00:23:44.635 }, 00:23:44.635 "method": "bdev_nvme_attach_controller" 00:23:44.635 } 00:23:44.635 EOF 00:23:44.635 )") 00:23:44.635 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.635 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.635 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.635 { 00:23:44.635 "params": { 00:23:44.635 "name": "Nvme$subsystem", 00:23:44.635 "trtype": "$TEST_TRANSPORT", 00:23:44.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.635 "adrfam": "ipv4", 00:23:44.635 "trsvcid": "$NVMF_PORT", 00:23:44.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.635 "hdgst": ${hdgst:-false}, 00:23:44.635 "ddgst": ${ddgst:-false} 00:23:44.635 }, 00:23:44.635 "method": "bdev_nvme_attach_controller" 00:23:44.635 } 00:23:44.635 EOF 00:23:44.635 )") 00:23:44.635 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.635 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.635 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.635 { 00:23:44.635 "params": { 00:23:44.635 "name": "Nvme$subsystem", 00:23:44.635 "trtype": "$TEST_TRANSPORT", 00:23:44.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.635 "adrfam": "ipv4", 00:23:44.635 "trsvcid": "$NVMF_PORT", 00:23:44.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.635 "hdgst": ${hdgst:-false}, 00:23:44.635 "ddgst": ${ddgst:-false} 00:23:44.635 }, 00:23:44.635 "method": "bdev_nvme_attach_controller" 00:23:44.635 } 00:23:44.635 EOF 00:23:44.635 )") 00:23:44.635 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.939 [2024-12-05 13:28:07.198846] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:23:44.939 [2024-12-05 13:28:07.198905] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1003311 ] 00:23:44.939 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.939 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.939 { 00:23:44.939 "params": { 00:23:44.939 "name": "Nvme$subsystem", 00:23:44.939 "trtype": "$TEST_TRANSPORT", 00:23:44.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.939 "adrfam": "ipv4", 00:23:44.939 "trsvcid": "$NVMF_PORT", 00:23:44.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.939 "hdgst": ${hdgst:-false}, 00:23:44.939 "ddgst": ${ddgst:-false} 00:23:44.939 }, 00:23:44.939 "method": "bdev_nvme_attach_controller" 00:23:44.939 } 00:23:44.939 EOF 00:23:44.939 )") 00:23:44.939 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.939 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.939 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.939 { 00:23:44.939 "params": { 00:23:44.939 "name": "Nvme$subsystem", 00:23:44.939 "trtype": "$TEST_TRANSPORT", 00:23:44.939 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.939 "adrfam": "ipv4", 00:23:44.939 "trsvcid": "$NVMF_PORT", 00:23:44.939 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.939 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.939 "hdgst": ${hdgst:-false}, 00:23:44.939 "ddgst": ${ddgst:-false} 00:23:44.939 }, 00:23:44.939 "method": "bdev_nvme_attach_controller" 00:23:44.939 } 00:23:44.939 EOF 00:23:44.939 )") 00:23:44.939 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.939 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.940 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.940 { 00:23:44.940 "params": { 00:23:44.940 "name": "Nvme$subsystem", 00:23:44.940 "trtype": "$TEST_TRANSPORT", 00:23:44.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.940 "adrfam": "ipv4", 00:23:44.940 "trsvcid": "$NVMF_PORT", 00:23:44.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.940 "hdgst": ${hdgst:-false}, 00:23:44.940 "ddgst": ${ddgst:-false} 00:23:44.940 }, 00:23:44.940 "method": "bdev_nvme_attach_controller" 00:23:44.940 } 00:23:44.940 EOF 00:23:44.940 )") 00:23:44.940 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.940 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:44.940 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:44.940 { 00:23:44.940 "params": { 00:23:44.940 "name": "Nvme$subsystem", 00:23:44.940 "trtype": "$TEST_TRANSPORT", 00:23:44.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:44.940 
"adrfam": "ipv4", 00:23:44.940 "trsvcid": "$NVMF_PORT", 00:23:44.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:44.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:44.940 "hdgst": ${hdgst:-false}, 00:23:44.940 "ddgst": ${ddgst:-false} 00:23:44.940 }, 00:23:44.940 "method": "bdev_nvme_attach_controller" 00:23:44.940 } 00:23:44.940 EOF 00:23:44.940 )") 00:23:44.940 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:44.940 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:44.940 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:44.940 13:28:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:44.940 "params": { 00:23:44.940 "name": "Nvme1", 00:23:44.940 "trtype": "tcp", 00:23:44.940 "traddr": "10.0.0.2", 00:23:44.940 "adrfam": "ipv4", 00:23:44.940 "trsvcid": "4420", 00:23:44.940 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.940 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:44.940 "hdgst": false, 00:23:44.940 "ddgst": false 00:23:44.940 }, 00:23:44.940 "method": "bdev_nvme_attach_controller" 00:23:44.940 },{ 00:23:44.940 "params": { 00:23:44.940 "name": "Nvme2", 00:23:44.940 "trtype": "tcp", 00:23:44.940 "traddr": "10.0.0.2", 00:23:44.940 "adrfam": "ipv4", 00:23:44.940 "trsvcid": "4420", 00:23:44.940 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:44.940 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:44.940 "hdgst": false, 00:23:44.940 "ddgst": false 00:23:44.940 }, 00:23:44.940 "method": "bdev_nvme_attach_controller" 00:23:44.940 },{ 00:23:44.940 "params": { 00:23:44.940 "name": "Nvme3", 00:23:44.940 "trtype": "tcp", 00:23:44.940 "traddr": "10.0.0.2", 00:23:44.940 "adrfam": "ipv4", 00:23:44.940 "trsvcid": "4420", 00:23:44.940 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:44.940 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:44.940 "hdgst": false, 00:23:44.940 "ddgst": false 00:23:44.940 }, 00:23:44.940 "method": "bdev_nvme_attach_controller" 00:23:44.940 },{ 00:23:44.940 "params": { 00:23:44.940 "name": "Nvme4", 00:23:44.940 "trtype": "tcp", 00:23:44.940 "traddr": "10.0.0.2", 00:23:44.940 "adrfam": "ipv4", 00:23:44.940 "trsvcid": "4420", 00:23:44.940 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:44.940 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:44.940 "hdgst": false, 00:23:44.940 "ddgst": false 00:23:44.940 }, 00:23:44.940 "method": "bdev_nvme_attach_controller" 00:23:44.940 },{ 00:23:44.940 "params": { 00:23:44.940 "name": "Nvme5", 00:23:44.940 "trtype": "tcp", 00:23:44.940 "traddr": "10.0.0.2", 00:23:44.940 "adrfam": "ipv4", 00:23:44.940 "trsvcid": "4420", 00:23:44.940 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:44.940 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:44.940 "hdgst": false, 00:23:44.940 "ddgst": false 00:23:44.940 }, 00:23:44.940 "method": "bdev_nvme_attach_controller" 00:23:44.940 },{ 00:23:44.940 "params": { 00:23:44.940 "name": "Nvme6", 00:23:44.940 "trtype": "tcp", 00:23:44.940 "traddr": "10.0.0.2", 00:23:44.940 "adrfam": "ipv4", 00:23:44.940 "trsvcid": "4420", 00:23:44.940 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:44.940 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:44.940 "hdgst": false, 00:23:44.940 "ddgst": false 00:23:44.940 }, 00:23:44.940 "method": "bdev_nvme_attach_controller" 00:23:44.940 },{ 00:23:44.940 "params": { 00:23:44.940 "name": "Nvme7", 00:23:44.940 "trtype": "tcp", 00:23:44.940 "traddr": "10.0.0.2", 
00:23:44.940 "adrfam": "ipv4", 00:23:44.940 "trsvcid": "4420", 00:23:44.940 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:44.940 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:44.940 "hdgst": false, 00:23:44.940 "ddgst": false 00:23:44.940 }, 00:23:44.940 "method": "bdev_nvme_attach_controller" 00:23:44.940 },{ 00:23:44.940 "params": { 00:23:44.940 "name": "Nvme8", 00:23:44.940 "trtype": "tcp", 00:23:44.940 "traddr": "10.0.0.2", 00:23:44.940 "adrfam": "ipv4", 00:23:44.940 "trsvcid": "4420", 00:23:44.940 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:44.940 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:44.940 "hdgst": false, 00:23:44.940 "ddgst": false 00:23:44.940 }, 00:23:44.940 "method": "bdev_nvme_attach_controller" 00:23:44.940 },{ 00:23:44.940 "params": { 00:23:44.940 "name": "Nvme9", 00:23:44.940 "trtype": "tcp", 00:23:44.940 "traddr": "10.0.0.2", 00:23:44.940 "adrfam": "ipv4", 00:23:44.940 "trsvcid": "4420", 00:23:44.940 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:44.940 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:44.940 "hdgst": false, 00:23:44.940 "ddgst": false 00:23:44.940 }, 00:23:44.940 "method": "bdev_nvme_attach_controller" 00:23:44.940 },{ 00:23:44.940 "params": { 00:23:44.940 "name": "Nvme10", 00:23:44.940 "trtype": "tcp", 00:23:44.940 "traddr": "10.0.0.2", 00:23:44.940 "adrfam": "ipv4", 00:23:44.940 "trsvcid": "4420", 00:23:44.940 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:44.940 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:44.940 "hdgst": false, 00:23:44.940 "ddgst": false 00:23:44.940 }, 00:23:44.940 "method": "bdev_nvme_attach_controller" 00:23:44.940 }' 00:23:44.940 [2024-12-05 13:28:07.277585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.940 [2024-12-05 13:28:07.313719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.352 Running I/O for 10 seconds... 
00:23:46.352 13:28:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:46.352 13:28:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:46.352 13:28:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:46.352 13:28:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.352 13:28:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.613 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.613 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:46.614 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:46.614 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:46.614 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:46.614 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:46.614 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:46.614 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:46.614 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:46.614 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:46.614 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:46.614 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.614 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.614 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.614 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:46.614 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:46.614 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:46.874 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:46.874 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:46.874 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:46.874 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:46.874 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.874 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:46.874 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.874 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:46.874 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:46.874 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:47.134 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:47.134 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:47.134 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:47.134 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:47.134 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.134 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:47.134 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.134 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:47.134 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:47.134 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:47.134 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:47.134 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:47.134 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1002930 00:23:47.134 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1002930 ']' 00:23:47.135 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1002930 00:23:47.135 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:47.135 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.135 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1002930 00:23:47.420 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:47.420 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:47.420 13:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1002930' 00:23:47.420 killing process with pid 1002930 00:23:47.420 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1002930 00:23:47.420 13:28:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1002930 00:23:47.420 [2024-12-05 13:28:09.767258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657540 is same with the state(6) to be set
(message repeated with consecutive timestamps through [2024-12-05 13:28:09.767604] for tqpair=0x657540)
00:23:47.421 [2024-12-05 13:28:09.768810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1990 is same with the state(6) to be set
(message repeated with consecutive timestamps through [2024-12-05 13:28:09.769134] for tqpair=0x8b1990)
00:23:47.421 [2024-12-05 13:28:09.770197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657a30 is same with the state(6) to be set
(message repeated through [2024-12-05 13:28:09.770215] for tqpair=0x657a30)
00:23:47.421 [2024-12-05 13:28:09.771363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set
(message repeated through [2024-12-05 13:28:09.771484] for tqpair=0x657f00)
00:23:47.421 [2024-12-05 13:28:09.771489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.421 [2024-12-05 13:28:09.771494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.421 [2024-12-05 13:28:09.771498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771593] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.771687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x657f00 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 
00:23:47.422 [2024-12-05 13:28:09.772848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772854] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772884] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772889] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is 
same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.772998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773016] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773026] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773040] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773153] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6583f0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6588c0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6588c0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.773725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6588c0 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.774250] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.774262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.774267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.774272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.774277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.774282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.422 [2024-12-05 13:28:09.774286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774291] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774329] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 
00:23:47.423 [2024-12-05 13:28:09.774367] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is 
same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.774553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x658d90 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775589] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775645] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.423 [2024-12-05 13:28:09.775681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 
00:23:47.424 [2024-12-05 13:28:09.775690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.775765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659260 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is 
same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776613] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776623] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.776688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x659730 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.777147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.777161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.777166] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.777170] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.777175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.777180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.424 [2024-12-05 13:28:09.777184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 
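For context: tcp.c:1790 is a defensive guard in SPDK's NVMe-oF TCP target transport. nvmf_tcp_qpair_set_recv_state() logs and returns early when asked to set the receive state the qpair is already in, and the disconnect path trips it in a loop, so each repetition carries no new information. A minimal sketch of that guard, assuming the shape of lib/nvmf/tcp.c at this revision (simplified; the real function also does per-state bookkeeping):

    static void
    nvmf_tcp_qpair_set_recv_state(struct spdk_nvmf_tcp_qpair *tqpair,
                                  enum nvme_tcp_pdu_recv_state state)
    {
            if (tqpair->recv_state == state) {
                    /* The line repeated throughout this log: a no-op
                     * transition into the current state. Noisy, not fatal. */
                    SPDK_ERRLOG("The recv state of tqpair=%p is same with the state(%d) to be set\n",
                                tqpair, state);
                    return;
            }
            tqpair->recv_state = state;
            /* ... state-specific bookkeeping elided ... */
    }

state(6) is a value of the internal nvme_tcp_pdu_recv_state enum; which named state it maps to depends on the SPDK revision, so it is left undecoded here.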
00:23:47.424 [2024-12-05 13:28:09.785403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.424 [2024-12-05 13:28:09.785443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.424 [2024-12-05 13:28:09.785454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.424 [2024-12-05 13:28:09.785461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.424 [2024-12-05 13:28:09.785469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.424 [2024-12-05 13:28:09.785477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.424 [2024-12-05 13:28:09.785485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.424 [2024-12-05 13:28:09.785492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.424 [2024-12-05 13:28:09.785500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb15b10 is same with the state(6) to be set
00:23:47.424 [2024-12-05 13:28:09.785530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.424 [2024-12-05 13:28:09.785544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.424 [2024-12-05 13:28:09.785557] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.424 [2024-12-05 13:28:09.785571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.424 [2024-12-05 13:28:09.785580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.425 [2024-12-05 13:28:09.785587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.425 [2024-12-05 13:28:09.785595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:47.425 [2024-12-05 13:28:09.785603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
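For context: the NOTICE pairs above are the expected admin-queue teardown pattern. The host keeps ASYNC EVENT REQUEST (opcode 0x0c) commands outstanding on each controller's admin queue; when the admin submission queue is deleted on disconnect they complete with status (00/08), read as status code type 0x00 (generic) / status code 0x08 (command aborted due to SQ deletion). A sketch of pulling those two fields from a completion, assuming SPDK's public struct spdk_nvme_cpl layout; print_sct_sc is a hypothetical helper, not an SPDK API:

    #include <stdio.h>
    #include "spdk/nvme_spec.h"

    /* Reproduce the "(sct/sc)" pair that spdk_nvme_print_completion()
     * shows, e.g. (00/08) for ABORTED - SQ DELETION. */
    static void
    print_sct_sc(const struct spdk_nvme_cpl *cpl)
    {
            printf("(%02x/%02x)\n", cpl->status.sct, cpl->status.sc);
    }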
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22430 is same with the state(6) to be set 00:23:47.425 [2024-12-05 13:28:09.785635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.785644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.785660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.785676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.785692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf49910 is same with the state(6) to be set 00:23:47.425 [2024-12-05 13:28:09.785726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.785735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.785750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.785765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.785781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4a7a0 is same with the state(6) to be set 00:23:47.425 [2024-12-05 13:28:09.785813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.785822] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.785838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.785854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.785875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9ea80 is same with the state(6) to be set 00:23:47.425 [2024-12-05 13:28:09.785917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.785926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.785944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.785960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.785978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.785990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa41610 is same with the state(6) to be set 00:23:47.425 [2024-12-05 13:28:09.786015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.786024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.786039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.786055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.786070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf53f10 is same with the state(6) to be set 00:23:47.425 [2024-12-05 13:28:09.786101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.786110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.786126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.786141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.786157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb238b0 is same with the state(6) to be set 00:23:47.425 [2024-12-05 13:28:09.786185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.786193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.786209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.786224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786238] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:47.425 [2024-12-05 13:28:09.786246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22230 is same with the state(6) to be set 00:23:47.425 [2024-12-05 13:28:09.786647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.425 [2024-12-05 13:28:09.786965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.425 [2024-12-05 13:28:09.786975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.786982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.786991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.786998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787148] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 
[2024-12-05 13:28:09.787634] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787675] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05 13:28:09.787691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.426 [2024-12-05
13:28:09.787716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.426 [2024-12-05 13:28:09.787728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.426 [2024-12-05 13:28:09.787731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.787733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.787743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.787757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:47.427 [2024-12-05 13:28:09.787781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787802]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787843] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787848] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787891] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.787900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c1d70 is same with the state(6) to be set 00:23:47.427 [2024-12-05 13:28:09.790831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.790854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.790873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.790882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.790893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.790902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.790913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.790922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.790933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.790942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.790952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.790961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.790972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.790980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.790989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.790997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:47.427 [2024-12-05 13:28:09.791051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 
[2024-12-05 13:28:09.791213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.427 [2024-12-05 13:28:09.791334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.427 [2024-12-05 13:28:09.791343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.428 [2024-12-05 13:28:09.791350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.428 [2024-12-05 13:28:09.791359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.428 [2024-12-05 13:28:09.791366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.428 [2024-12-05 13:28:09.791375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.428 [2024-12-05 
13:28:09.791383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.428 [... 2024-12-05 13:28:09.791392 - 13:28:09.799524: 33 WRITE commands, sqid:1 cid:31-63 nsid:1 lba:28544-32640 (lba advancing by 128 per cid) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each aborted with SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:47.428 [2024-12-05 13:28:09.799853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:47.428 [2024-12-05 13:28:09.799918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb22430 (9): Bad file descriptor
00:23:47.428 [2024-12-05 13:28:09.799956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb15b10 (9): Bad file descriptor
00:23:47.428 [2024-12-05 13:28:09.799976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf49910 (9): Bad file descriptor
00:23:47.428 [2024-12-05 13:28:09.799996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4a7a0 (9): Bad file descriptor
00:23:47.428 [2024-12-05 13:28:09.800014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9ea80 (9): Bad file descriptor
00:23:47.428 [... 2024-12-05 13:28:09.800050 - 13:28:09.800109: 4 ASYNC EVENT REQUEST commands (0c), qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each aborted with SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:47.428 [2024-12-05 13:28:09.800116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9e8a0 is same with the state(6) to be set
00:23:47.428 [2024-12-05 13:28:09.800134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa41610 (9): Bad file descriptor
00:23:47.428 [2024-12-05 13:28:09.800152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf53f10 (9): Bad file descriptor
00:23:47.428 [2024-12-05 13:28:09.800166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb238b0 (9): Bad file descriptor
00:23:47.428 [2024-12-05 13:28:09.800187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb22230 (9): Bad file descriptor
00:23:47.428 [2024-12-05 13:28:09.801801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:47.428 [2024-12-05 13:28:09.802291] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:47.428 [2024-12-05 13:28:09.802344] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:47.428 [2024-12-05 13:28:09.802387] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:47.428 [2024-12-05 13:28:09.802465] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:47.428 [2024-12-05 13:28:09.802705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.428 [2024-12-05 13:28:09.802722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb22430 with addr=10.0.0.2, port=4420
00:23:47.428 [2024-12-05 13:28:09.802731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22430 is same with the state(6) to be set
00:23:47.428 [2024-12-05 13:28:09.803215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.428 [2024-12-05 13:28:09.803255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf53f10 with addr=10.0.0.2, port=4420
00:23:47.428 [2024-12-05 13:28:09.803267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf53f10 is same with the state(6) to be set
00:23:47.428 [2024-12-05 13:28:09.803321] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:47.428 [2024-12-05 13:28:09.803381] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:47.428 [2024-12-05 13:28:09.803423] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:47.428 [2024-12-05 13:28:09.803759] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:47.428 [2024-12-05 13:28:09.803788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb22430 (9): Bad file descriptor
00:23:47.428 [2024-12-05 13:28:09.803801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf53f10 (9): Bad file descriptor
00:23:47.429 [2024-12-05 13:28:09.803898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:47.429 [2024-12-05 13:28:09.803910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:47.429 [2024-12-05 13:28:09.803919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:47.429 [2024-12-05 13:28:09.803929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:23:47.429 [2024-12-05 13:28:09.803938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:23:47.429 [2024-12-05 13:28:09.803944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:23:47.429 [2024-12-05 13:28:09.803951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:23:47.429 [2024-12-05 13:28:09.803958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:23:47.429 [2024-12-05 13:28:09.809911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9e8a0 (9): Bad file descriptor
00:23:47.429 [... 2024-12-05 13:28:09.810058 - 13:28:09.811168: 64 READ commands, sqid:1 cid:0-63 nsid:1 lba:24576-32640 (lba advancing by 128 per cid) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each aborted with SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:47.430 [2024-12-05 13:28:09.811177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd2fc40 is same with the state(6) to be set
00:23:47.430 [... 2024-12-05 13:28:09.812466 - 13:28:09.813577: 64 READ commands, sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each aborted with SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:47.431 [2024-12-05 13:28:09.813585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104d870 is same with the state(6) to be set
00:23:47.431 [... 2024-12-05 13:28:09.814877 - 13:28:09.815371: 29 READ commands, sqid:1 cid:0-28 nsid:1 lba:24576-28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each aborted with SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:47.432 [2024-12-05 13:28:09.815381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:47.432 [2024-12-05 13:28:09.815728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.432 [2024-12-05 13:28:09.815865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.432 [2024-12-05 13:28:09.815875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.815882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.815891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 
13:28:09.815899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.815908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.815916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.815927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.815934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.815944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.815951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.815961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.815968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.815976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf314b0 is same with the state(6) to be set 00:23:47.433 [2024-12-05 13:28:09.817245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817340] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817513] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.433 [2024-12-05 13:28:09.817785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.433 [2024-12-05 13:28:09.817792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.817802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.817810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.817819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.817826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.817841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.817848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.817858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.817871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.817881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.817888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.817897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.817905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.817914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.817922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.817931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.817938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.817948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.817955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.817964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.817972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.817981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.817989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.817999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.818348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.818358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104da50 is same with the state(6) to be set 00:23:47.434 [2024-12-05 13:28:09.819710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.819723] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.819735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.434 [2024-12-05 13:28:09.819742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.434 [2024-12-05 13:28:09.819753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.819760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.819770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.819778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.819787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.819794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.819805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.819812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.819822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.819829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.819839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.819849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.819858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.819870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.819880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.819887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.819897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.819904] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.819913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.819921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.819930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.819938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.819947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.819954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.819964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.819971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.819981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.819988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.819998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.820005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.820014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.820022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.820031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.820039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.820048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.820056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.820066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.820074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.820083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.820091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.820100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.820107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.820117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.820124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.820134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.820141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.820150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.820158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.820168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.820175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.820185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.820192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.820201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.820209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.820218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.820225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.435 [2024-12-05 13:28:09.820235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.435 [2024-12-05 13:28:09.820243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.435 [2024-12-05 13:28:09.820252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.435 [2024-12-05 13:28:09.820260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) record pairs for cid:32-63, lba:28672-32640, elided ...]
00:23:47.436 [2024-12-05 13:28:09.820810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104ed10 is same with the state(6) to be set
00:23:47.436 [2024-12-05 13:28:09.822091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.436 [2024-12-05 13:28:09.822105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical record pairs for READ cid:4-63 (lba:25088-32640) and WRITE cid:0-2 (lba:32768-33024), elided ...]
00:23:47.438 [2024-12-05 13:28:09.823212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104ffd0 is same with the state(6) to be set
00:23:47.438 [2024-12-05 13:28:09.824500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.438 [2024-12-05 13:28:09.824512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) record pairs for cid:1-63, lba:16512-24448, elided ...]
00:23:47.440 [2024-12-05 13:28:09.825611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1046650 is same with the state(6) to be set
00:23:47.440 [2024-12-05 13:28:09.826878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:47.440 [2024-12-05 13:28:09.826897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:47.440 [2024-12-05 13:28:09.826911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:47.440 [2024-12-05 13:28:09.826925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:47.440 [2024-12-05 13:28:09.827003] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:23:47.440 [2024-12-05 13:28:09.827017] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:23:47.440 [2024-12-05 13:28:09.827036] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:23:47.440 [2024-12-05 13:28:09.827125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:47.440 [2024-12-05 13:28:09.827139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:47.440 [2024-12-05 13:28:09.827151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:47.440 [2024-12-05 13:28:09.827600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.440 [2024-12-05 13:28:09.827615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb15b10 with addr=10.0.0.2, port=4420
00:23:47.440 [2024-12-05 13:28:09.827625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb15b10 is same with the state(6) to be set
00:23:47.440 [2024-12-05 13:28:09.828080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.440 [2024-12-05 13:28:09.828121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb238b0 with addr=10.0.0.2, port=4420
00:23:47.440 [2024-12-05 13:28:09.828133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb238b0 is same with the state(6) to be set
00:23:47.440 [2024-12-05 13:28:09.828512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.440 [2024-12-05 13:28:09.828523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb22230 with addr=10.0.0.2, port=4420
00:23:47.440 [2024-12-05 13:28:09.828531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22230 is same with the state(6) to be set
00:23:47.440 [2024-12-05 13:28:09.829096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.440 [2024-12-05 13:28:09.829133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf49910 with addr=10.0.0.2, port=4420
00:23:47.440 [2024-12-05 13:28:09.829145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf49910 is same with the state(6) to be set
00:23:47.440 [2024-12-05 13:28:09.831042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:47.440 [2024-12-05 13:28:09.831061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:47.440 [2024-12-05 13:28:09.831416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.440 [2024-12-05 13:28:09.831431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa41610 with addr=10.0.0.2, port=4420
00:23:47.440 [2024-12-05 13:28:09.831439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa41610 is same with the state(6) to be set
00:23:47.440 [2024-12-05 13:28:09.831788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.440 [2024-12-05 13:28:09.831798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4a7a0 with addr=10.0.0.2, port=4420
00:23:47.440 [2024-12-05 13:28:09.831806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4a7a0 is same with the state(6) to be set
00:23:47.440 [2024-12-05 13:28:09.832133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.440 [2024-12-05 13:28:09.832144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9ea80 with addr=10.0.0.2, port=4420
00:23:47.440 [2024-12-05 13:28:09.832151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9ea80 is same with the state(6) to be set
00:23:47.440 [2024-12-05 13:28:09.832162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb15b10 (9): Bad file descriptor
00:23:47.440 [2024-12-05 13:28:09.832179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb238b0 (9): Bad file descriptor
00:23:47.440 [2024-12-05 13:28:09.832188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb22230 (9): Bad file descriptor
00:23:47.440 [2024-12-05 13:28:09.832197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf49910 (9): Bad file descriptor
00:23:47.440 [2024-12-05 13:28:09.832298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.440 [2024-12-05 13:28:09.832312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) record pairs for cid:1-26, lba:16512-19712, elided ...]
00:23:47.441 [2024-12-05 13:28:09.832768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.832776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.832785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.832793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.832802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.832809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.832818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.832826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.832836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.832844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.832853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.832860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.832876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.832883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.832893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.832900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.832909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.832916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.832927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.832935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.832944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:47.441 [2024-12-05 13:28:09.832952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.832961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.832968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.832978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.832985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.832995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 
13:28:09.833121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:47.441 [2024-12-05 13:28:09.833286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:47.441 [2024-12-05 13:28:09.833294] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.441 [2024-12-05 13:28:09.833303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.441 [2024-12-05 13:28:09.833310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.441 [2024-12-05 13:28:09.833320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.441 [2024-12-05 13:28:09.833327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.441 [2024-12-05 13:28:09.833337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.441 [2024-12-05 13:28:09.833344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.441 [2024-12-05 13:28:09.833356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.441 [2024-12-05 13:28:09.833363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.441 [2024-12-05 13:28:09.833372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.441 [2024-12-05 13:28:09.833380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.441 [2024-12-05 13:28:09.833389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:47.441 [2024-12-05 13:28:09.833396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:47.441 [2024-12-05 13:28:09.833404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1051290 is same with the state(6) to be set
00:23:47.441 task offset: 30592 on job bdev=Nvme2n1 fails
00:23:47.441
00:23:47.442                                                                           Latency(us)
00:23:47.442 [2024-12-05T12:28:10.010Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:47.442 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.442 Job: Nvme1n1 ended in about 0.98 seconds with error
00:23:47.442 Verification LBA range: start 0x0 length 0x400
00:23:47.442 Nvme1n1   :       0.98     196.30      12.27      65.43       0.00  241794.56   16930.13  244667.73
00:23:47.442 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.442 Job: Nvme2n1 ended in about 0.96 seconds with error
00:23:47.442 Verification LBA range: start 0x0 length 0x400
00:23:47.442 Nvme2n1   :       0.96     200.74      12.55      66.91       0.00  231560.56    3276.80  244667.73
00:23:47.442 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.442 Job: Nvme3n1 ended in about 0.98 seconds with error
00:23:47.442 Verification LBA range: start 0x0 length 0x400
00:23:47.442 Nvme3n1   :       0.98     195.82      12.24      65.27       0.00  232721.28   12397.23  249910.61
00:23:47.442 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.442 Job: Nvme4n1 ended in about 0.98 seconds with error
00:23:47.442 Verification LBA range: start 0x0 length 0x400
00:23:47.442 Nvme4n1   :       0.98     195.35      12.21      65.12       0.00  228437.55   18240.85  237677.23
00:23:47.442 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.442 Job: Nvme5n1 ended in about 0.97 seconds with error
00:23:47.442 Verification LBA range: start 0x0 length 0x400
00:23:47.442 Nvme5n1   :       0.97     198.51      12.41      66.17       0.00  219783.04   11304.96  249910.61
00:23:47.442 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.442 Job: Nvme6n1 ended in about 0.99 seconds with error
00:23:47.442 Verification LBA range: start 0x0 length 0x400
00:23:47.442 Nvme6n1   :       0.99     140.07       8.75      64.96       0.00  278187.34   18896.21  270882.13
00:23:47.442 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.442 Job: Nvme7n1 ended in about 0.99 seconds with error
00:23:47.442 Verification LBA range: start 0x0 length 0x400
00:23:47.442 Nvme7n1   :       0.99     194.39      12.15      64.80       0.00  215166.08   21408.43  248162.99
00:23:47.442 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.442 Job: Nvme8n1 ended in about 0.99 seconds with error
00:23:47.442 Verification LBA range: start 0x0 length 0x400
00:23:47.442 Nvme8n1   :       0.99     196.95      12.31      64.64       0.00  208466.26   18240.85  242920.11
00:23:47.442 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.442 Job: Nvme9n1 ended in about 1.00 seconds with error
00:23:47.442 Verification LBA range: start 0x0 length 0x400
00:23:47.442 Nvme9n1   :       1.00     127.97       8.00      63.98       0.00  278275.41   16602.45  260396.37
00:23:47.442 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:47.442 Job: Nvme10n1 ended in about 0.99 seconds with error
00:23:47.442 Verification LBA range: start 0x0 length 0x400
00:23:47.442 Nvme10n1  :       0.99     128.97       8.06      64.48       0.00  269236.91   19879.25  265639.25
00:23:47.442 [2024-12-05T12:28:10.010Z] ===================================================================================================================
00:23:47.442 [2024-12-05T12:28:10.010Z] Total     :    1775.06     110.94     651.77       0.00  237669.68    3276.80  270882.13
00:23:47.442 [2024-12-05 13:28:09.863466] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:47.442 [2024-12-05 13:28:09.863522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:47.442 [2024-12-05 13:28:09.863940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.442 [2024-12-05 13:28:09.863962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf53f10 with addr=10.0.0.2, port=4420
00:23:47.442 [2024-12-05 13:28:09.863973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf53f10 is same with the state(6) to be set
00:23:47.442 [2024-12-05 13:28:09.864307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:47.442 [2024-12-05 13:28:09.864318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb22430 with addr=10.0.0.2, port=4420
00:23:47.442 [2024-12-05 13:28:09.864325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22430 is same with the state(6) to be set
00:23:47.442 [2024-12-05 13:28:09.864338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa41610 (9): Bad
file descriptor 00:23:47.442 [2024-12-05 13:28:09.864351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4a7a0 (9): Bad file descriptor 00:23:47.442 [2024-12-05 13:28:09.864360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9ea80 (9): Bad file descriptor 00:23:47.442 [2024-12-05 13:28:09.864370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:47.442 [2024-12-05 13:28:09.864377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:47.442 [2024-12-05 13:28:09.864386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:47.442 [2024-12-05 13:28:09.864396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:47.442 [2024-12-05 13:28:09.864405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:47.442 [2024-12-05 13:28:09.864411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:47.442 [2024-12-05 13:28:09.864418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:47.442 [2024-12-05 13:28:09.864425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:47.442 [2024-12-05 13:28:09.864432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:47.442 [2024-12-05 13:28:09.864438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:47.442 [2024-12-05 13:28:09.864446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:47.442 [2024-12-05 13:28:09.864452] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:47.442 [2024-12-05 13:28:09.864460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:47.442 [2024-12-05 13:28:09.864466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:47.442 [2024-12-05 13:28:09.864473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:47.442 [2024-12-05 13:28:09.864479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
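A quick sanity check on the bdevperf table above: with 64 KiB (65536-byte) IOs, throughput in MiB/s is simply IOPS divided by 16, and both the per-device rows and the Total row are consistent with that. A minimal sketch (the awk invocations are illustrative only, not part of the test scripts):

  # Hedged sketch: verify MiB/s = IOPS / 16 for 64 KiB IOs against the table above.
  awk 'BEGIN { printf "Nvme1n1: %.2f MiB/s\n", 196.30 / 16 }'   # prints 12.27, matches the Nvme1n1 row
  awk 'BEGIN { printf "Total:   %.2f MiB/s\n", 1775.06 / 16 }'  # prints 110.94, matches the Total row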
00:23:47.442 [2024-12-05 13:28:09.864845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.442 [2024-12-05 13:28:09.864861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9e8a0 with addr=10.0.0.2, port=4420 00:23:47.442 [2024-12-05 13:28:09.864874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9e8a0 is same with the state(6) to be set 00:23:47.442 [2024-12-05 13:28:09.864884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf53f10 (9): Bad file descriptor 00:23:47.442 [2024-12-05 13:28:09.864894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb22430 (9): Bad file descriptor 00:23:47.442 [2024-12-05 13:28:09.864902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:47.442 [2024-12-05 13:28:09.864909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:47.442 [2024-12-05 13:28:09.864915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:47.442 [2024-12-05 13:28:09.864923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:23:47.442 [2024-12-05 13:28:09.864930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:47.442 [2024-12-05 13:28:09.864937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:47.442 [2024-12-05 13:28:09.864944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:47.442 [2024-12-05 13:28:09.864950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:47.442 [2024-12-05 13:28:09.864957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:47.442 [2024-12-05 13:28:09.864964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:47.442 [2024-12-05 13:28:09.864971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:47.442 [2024-12-05 13:28:09.864977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:47.442 [2024-12-05 13:28:09.865040] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:23:47.442 [2024-12-05 13:28:09.865053] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:23:47.442 [2024-12-05 13:28:09.865399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9e8a0 (9): Bad file descriptor 00:23:47.442 [2024-12-05 13:28:09.865411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:23:47.442 [2024-12-05 13:28:09.865418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:23:47.442 [2024-12-05 13:28:09.865425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:23:47.442 [2024-12-05 13:28:09.865432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:23:47.442 [2024-12-05 13:28:09.865440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:47.442 [2024-12-05 13:28:09.865446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:47.442 [2024-12-05 13:28:09.865453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:47.442 [2024-12-05 13:28:09.865459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:47.442 [2024-12-05 13:28:09.865502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:23:47.442 [2024-12-05 13:28:09.865513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:47.442 [2024-12-05 13:28:09.865522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:47.442 [2024-12-05 13:28:09.865531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:47.442 [2024-12-05 13:28:09.865540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:47.442 [2024-12-05 13:28:09.865549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:47.442 [2024-12-05 13:28:09.865558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:23:47.442 [2024-12-05 13:28:09.865607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:23:47.442 [2024-12-05 13:28:09.865614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:23:47.442 [2024-12-05 13:28:09.865621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:23:47.442 [2024-12-05 13:28:09.865627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
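The recurring "connect() failed, errno = 111" above is ECONNREFUSED: the tc3 shutdown has already torn down the target's listener on 10.0.0.2:4420, so every reconnect attempt bdev_nvme makes is refused immediately and each controller ends in "Resetting controller failed." A minimal, hand-run probe of the same condition (pure bash, illustrative only, not part of the test scripts):

  # Hedged sketch: probe the NVMe/TCP listener the reconnect loop keeps hitting.
  # A refused connect here is the same ECONNREFUSED (errno 111) seen in posix.c above.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "listener is up on 10.0.0.2:4420"
  else
      echo "connect refused or timed out, matching the errno = 111 lines"
  fi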
00:23:47.443 [2024-12-05 13:28:09.866005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.443 [2024-12-05 13:28:09.866018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf49910 with addr=10.0.0.2, port=4420 00:23:47.443 [2024-12-05 13:28:09.866026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf49910 is same with the state(6) to be set 00:23:47.443 [2024-12-05 13:28:09.866357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.443 [2024-12-05 13:28:09.866367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb22230 with addr=10.0.0.2, port=4420 00:23:47.443 [2024-12-05 13:28:09.866374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb22230 is same with the state(6) to be set 00:23:47.443 [2024-12-05 13:28:09.866679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.443 [2024-12-05 13:28:09.866690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb238b0 with addr=10.0.0.2, port=4420 00:23:47.443 [2024-12-05 13:28:09.866697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb238b0 is same with the state(6) to be set 00:23:47.443 [2024-12-05 13:28:09.866865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.443 [2024-12-05 13:28:09.866875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb15b10 with addr=10.0.0.2, port=4420 00:23:47.443 [2024-12-05 13:28:09.866882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb15b10 is same with the state(6) to be set 00:23:47.443 [2024-12-05 13:28:09.867275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.443 [2024-12-05 13:28:09.867285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf9ea80 with addr=10.0.0.2, port=4420 00:23:47.443 [2024-12-05 13:28:09.867292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9ea80 is same with the state(6) to be set 00:23:47.443 [2024-12-05 13:28:09.867642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.443 [2024-12-05 13:28:09.867652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf4a7a0 with addr=10.0.0.2, port=4420 00:23:47.443 [2024-12-05 13:28:09.867660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4a7a0 is same with the state(6) to be set 00:23:47.443 [2024-12-05 13:28:09.867785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:47.443 [2024-12-05 13:28:09.867794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa41610 with addr=10.0.0.2, port=4420 00:23:47.443 [2024-12-05 13:28:09.867804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa41610 is same with the state(6) to be set 00:23:47.443 [2024-12-05 13:28:09.867837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf49910 (9): Bad file descriptor 00:23:47.443 [2024-12-05 13:28:09.867848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb22230 (9): Bad file descriptor 00:23:47.443 [2024-12-05 13:28:09.867857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0xb238b0 (9): Bad file descriptor 00:23:47.443 [2024-12-05 13:28:09.867871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb15b10 (9): Bad file descriptor 00:23:47.443 [2024-12-05 13:28:09.867880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf9ea80 (9): Bad file descriptor 00:23:47.443 [2024-12-05 13:28:09.867889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4a7a0 (9): Bad file descriptor 00:23:47.443 [2024-12-05 13:28:09.867897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa41610 (9): Bad file descriptor 00:23:47.443 [2024-12-05 13:28:09.867922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:23:47.443 [2024-12-05 13:28:09.867930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:23:47.443 [2024-12-05 13:28:09.867937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:23:47.443 [2024-12-05 13:28:09.867944] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:23:47.443 [2024-12-05 13:28:09.867951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:47.443 [2024-12-05 13:28:09.867957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:47.443 [2024-12-05 13:28:09.867964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:47.443 [2024-12-05 13:28:09.867971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:23:47.443 [2024-12-05 13:28:09.867977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:47.443 [2024-12-05 13:28:09.867984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:47.443 [2024-12-05 13:28:09.867991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:47.443 [2024-12-05 13:28:09.867997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:23:47.443 [2024-12-05 13:28:09.868004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:47.443 [2024-12-05 13:28:09.868011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:47.443 [2024-12-05 13:28:09.868018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:47.443 [2024-12-05 13:28:09.868024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:23:47.443 [2024-12-05 13:28:09.868031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:23:47.443 [2024-12-05 13:28:09.868037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:23:47.443 [2024-12-05 13:28:09.868044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:23:47.443 [2024-12-05 13:28:09.868050] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:23:47.443 [2024-12-05 13:28:09.868059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:47.443 [2024-12-05 13:28:09.868066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:47.443 [2024-12-05 13:28:09.868072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:47.443 [2024-12-05 13:28:09.868079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:23:47.443 [2024-12-05 13:28:09.868086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:23:47.443 [2024-12-05 13:28:09.868092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:23:47.443 [2024-12-05 13:28:09.868099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:23:47.443 [2024-12-05 13:28:09.868105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
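At this point every controller from nqn.2016-06.io.spdk:cnode1 through cnode10 has gone through the same sequence: "Ctrlr is in error state", "controller reinitialization failed", "in failed state", "Resetting controller failed." During a run like this the per-controller state can also be dumped over the app's RPC socket; a hedged sketch using SPDK's stock rpc.py (the default socket path /var/tmp/spdk.sock is an assumption here, not something the trace shows):

  # Hedged sketch: list attached NVMe-oF controllers and their state from the
  # job's SPDK checkout (bdev_nvme_get_controllers is a stock SPDK RPC).
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_get_controllers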
00:23:47.706 13:28:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1003311 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1003311 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1003311 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:48.650 rmmod nvme_tcp 00:23:48.650 
rmmod nvme_fabrics 00:23:48.650 rmmod nvme_keyring 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1002930 ']' 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1002930 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1002930 ']' 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1002930 00:23:48.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1002930) - No such process 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1002930 is not found' 00:23:48.650 Process with pid 1002930 is not found 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.650 13:28:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:51.195 00:23:51.195 real 0m7.875s 00:23:51.195 user 0m19.338s 00:23:51.195 sys 0m1.292s 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:51.195 ************************************ 00:23:51.195 END TEST nvmf_shutdown_tc3 00:23:51.195 ************************************ 00:23:51.195 13:28:13 
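The xtrace above (NOT wait 1003311, es=255, es > 128, es=127, es=1, (( !es == 0 ))) is the test asserting that waiting on the killed bdevperf pid fails: tc3 passes precisely because the command wrapped in NOT exits nonzero. A simplified stand-in for the autotest_common.sh helper, mirroring only the status transformations visible in the trace (not the verbatim source):

  # Hedged sketch of the NOT helper's exit-status handling, as seen in the trace.
  NOT() {
      local es=0
      "$@" || es=$?             # run the wrapped command, keep its exit status
      (( es > 128 )) && es=127  # statuses above 128 (signal deaths) collapse to 127
      (( es != 0 )) && es=1     # any remaining failure normalizes to 1
      (( !es == 0 ))            # NOT succeeds (returns 0) only when the command failed
  }
  NOT wait 1003311              # waiting on the dead bdevperf pid fails, so NOT passes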
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:51.195 ************************************ 00:23:51.195 START TEST nvmf_shutdown_tc4 00:23:51.195 ************************************ 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:51.195 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:51.195 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.195 13:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:51.195 Found net devices under 0000:31:00.0: cvl_0_0 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:51.195 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:51.196 Found net devices under 0000:31:00.1: cvl_0_1 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:51.196 13:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:51.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:51.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms
00:23:51.196 
00:23:51.196 --- 10.0.0.2 ping statistics ---
00:23:51.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:51.196 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:51.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:51.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms
00:23:51.196 
00:23:51.196 --- 10.0.0.1 ping statistics ---
00:23:51.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:51.196 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1004776
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1004776
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1004776 ']'
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:51.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
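The trace above is the standard SPDK NVMe-oF TCP test fixture: the first E810 port (cvl_0_0) is moved into a private network namespace to host the target, the second port (cvl_0_1) stays in the root namespace as the initiator, 10.0.0.2 and 10.0.0.1 are assigned across the link, TCP port 4420 is opened in the firewall, and one ping in each direction verifies the path before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of the same setup, with the interface names and addresses copied from this run (they will differ on other hardware), run as root:

#!/usr/bin/env bash
# Sketch of the target/initiator split used by the test fixture.
NS=cvl_0_0_ns_spdk   # namespace that will host the target port
TGT_IF=cvl_0_0       # target-side interface (moved into the namespace)
INI_IF=cvl_0_1       # initiator-side interface (stays in the root namespace)
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1    # path sanity check

Running the target under ip netns exec, as the log does, keeps target and initiator traffic on the physical wire between the two ports instead of the loopback path, which is the point of the phy variant of this job.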
00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.196 13:28:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:51.196 [2024-12-05 13:28:13.757676] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:23:51.196 [2024-12-05 13:28:13.757741] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.456 [2024-12-05 13:28:13.860481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:51.456 [2024-12-05 13:28:13.893938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.456 [2024-12-05 13:28:13.893970] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.456 [2024-12-05 13:28:13.893976] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.456 [2024-12-05 13:28:13.893981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.456 [2024-12-05 13:28:13.893985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.456 [2024-12-05 13:28:13.895284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.456 [2024-12-05 13:28:13.895444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.456 [2024-12-05 13:28:13.895601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.456 [2024-12-05 13:28:13.895603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:52.027 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.027 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:52.027 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.027 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.027 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:52.287 [2024-12-05 13:28:14.611142] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:52.287 13:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.287 13:28:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:52.287 Malloc1 
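The loop above (shutdown.sh@28-29) appends one block of RPC commands per subsystem to rpcs.txt, and the single rpc_cmd call at shutdown.sh@36 then replays the whole batch against /var/tmp/spdk.sock; the redirect itself is not visible in xtrace, only the resulting Malloc1 through Malloc10 bdevs. The generated file is not shown in this log either, so the following is a plausible per-command equivalent issued through scripts/rpc.py; the bdev size, block size, and serial-number scheme are illustrative assumptions, while the nqn pattern and the 10.0.0.2:4420 listener match what the log reports:

#!/usr/bin/env bash
# Hypothetical re-creation of what the batched rpcs.txt amounts to.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in {1..10}; do
  # RAM-backed bdev per subsystem (128 MiB, 512 B blocks: assumed values)
  "$RPC" bdev_malloc_create -b "Malloc$i" 128 512
  # Subsystem open to any host (-a), with an assumed serial number
  "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

The listener address and port match the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice printed just below.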
00:23:52.287 [2024-12-05 13:28:14.719411] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.287 Malloc2 00:23:52.287 Malloc3 00:23:52.287 Malloc4 00:23:52.287 Malloc5 00:23:52.548 Malloc6 00:23:52.548 Malloc7 00:23:52.548 Malloc8 00:23:52.548 Malloc9 00:23:52.548 Malloc10 00:23:52.548 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.548 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:52.548 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.548 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:52.808 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1005055 00:23:52.808 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:52.808 13:28:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:52.808 [2024-12-05 13:28:15.180744] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:58.110 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:58.110 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1004776 00:23:58.110 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1004776 ']' 00:23:58.110 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1004776 00:23:58.110 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:58.110 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.110 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1004776 00:23:58.110 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:58.110 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:58.110 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1004776' 00:23:58.110 killing process with pid 1004776 00:23:58.110 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1004776 00:23:58.110 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1004776 00:23:58.110 [2024-12-05 13:28:20.195187] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10825b0 is same with the state(6) to be set
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6, repeated ...]
00:23:58.110 [2024-12-05 13:28:20.195507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082f50 is same with the state(6) to be set
00:23:58.110 [2024-12-05 13:28:20.195531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082f50 is same with the state(6) to be set
00:23:58.110 [2024-12-05 13:28:20.195537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082f50 is same with the state(6) to be set
00:23:58.110 [2024-12-05 13:28:20.195542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1082f50 is same with the state(6) to be set
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6, repeated ...]
00:23:58.110 [2024-12-05 13:28:20.195783] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10820e0 is same with the state(6) to be set
00:23:58.111 [2024-12-05 13:28:20.195804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10820e0 is same with the state(6) to be set
00:23:58.111 [2024-12-05 13:28:20.195810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10820e0 is same with the state(6) to be set
00:23:58.111 [2024-12-05 13:28:20.195815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10820e0 is same with the state(6) to be set
00:23:58.111 [2024-12-05 13:28:20.195901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6, repeated ...]
00:23:58.111 [2024-12-05 13:28:20.196948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6, repeated ...]
00:23:58.111 [2024-12-05 13:28:20.198285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:58.111 [2024-12-05 13:28:20.198405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109d580 is same with the state(6) to be set
00:23:58.111 [2024-12-05 13:28:20.198423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109d580 is same with the state(6) to be set
00:23:58.111 [2024-12-05 13:28:20.198428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109d580 is same with the state(6) to be set
00:23:58.111 [2024-12-05 13:28:20.198434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x109d580 is same with the state(6) to be set
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6, repeated ...]
00:23:58.112 [2024-12-05 13:28:20.200105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:58.112 NVMe io qpair process completion error
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6, repeated ...]
00:23:58.112 [2024-12-05 13:28:20.201340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6, repeated ...]
00:23:58.112 [2024-12-05 13:28:20.201895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff9c50 is same with the state(6) to be set
00:23:58.112 [2024-12-05 13:28:20.201914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff9c50 is same with the state(6) to be set
00:23:58.112 [2024-12-05 13:28:20.201920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff9c50 is same with the state(6) to be set
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6, repeated ...]
00:23:58.112 [2024-12-05 13:28:20.202163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6, repeated ...]
00:23:58.113 [2024-12-05 13:28:20.203101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6, repeated ...]
00:23:58.113 [2024-12-05 13:28:20.204667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:58.113 NVMe io qpair process completion error
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6, repeated ...]
00:23:58.113 [2024-12-05 13:28:20.205897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6, repeated ...]
00:23:58.114 [2024-12-05 13:28:20.206783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6, repeated ...]
00:23:58.114 [2024-12-05 13:28:20.207956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... Write completed with error (sct=0, sc=8) / starting I/O failed: -6, repeated ...]
00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 [2024-12-05 13:28:20.212397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:58.115 NVMe io qpair process completion error 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 [2024-12-05 13:28:20.213836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write 
completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 [2024-12-05 13:28:20.214679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.115 
Write completed with error (sct=0, sc=8) 00:23:58.115 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 [2024-12-05 13:28:20.215620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 
00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 
00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 [2024-12-05 13:28:20.217073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:58.116 NVMe io qpair process completion error 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 starting I/O failed: -6 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.116 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: 
-6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 [2024-12-05 13:28:20.218066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 
Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 [2024-12-05 13:28:20.218947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write 
completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 [2024-12-05 13:28:20.219896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.117 Write completed with error (sct=0, sc=8) 00:23:58.117 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write 
completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 [2024-12-05 
13:28:20.221361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:58.118 NVMe io qpair process completion error 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 [2024-12-05 13:28:20.222477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.118 starting I/O failed: -6 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 
starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 [2024-12-05 13:28:20.223316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.118 Write completed with error (sct=0, sc=8) 00:23:58.118 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O 
failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 [2024-12-05 13:28:20.224254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:58.119 starting I/O failed: -6 00:23:58.119 starting I/O failed: -6 00:23:58.119 starting I/O failed: -6 00:23:58.119 starting I/O failed: -6 00:23:58.119 starting I/O failed: -6 00:23:58.119 starting I/O failed: -6 00:23:58.119 starting I/O failed: -6 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 
00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 
00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 [2024-12-05 13:28:20.227299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:58.119 NVMe io qpair process completion error 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.119 starting I/O failed: -6 00:23:58.119 Write completed with error (sct=0, sc=8) 00:23:58.120 Write completed with error (sct=0, sc=8) 00:23:58.120 Write completed with error (sct=0, sc=8) 00:23:58.120 Write completed with error (sct=0, sc=8) 00:23:58.120 starting I/O failed: -6 00:23:58.120 Write completed with error (sct=0, sc=8) 00:23:58.120 Write completed with error (sct=0, sc=8) 00:23:58.120 Write completed with error (sct=0, sc=8) 00:23:58.120 Write completed with error (sct=0, sc=8) 00:23:58.120 starting 
I/O failed: -6
[Repeated data-path entries elided from this burst: interleaved "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines accompany each of the transport errors below.]
00:23:58.120 [2024-12-05 13:28:20.228648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:58.120 [2024-12-05 13:28:20.229602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.120 [2024-12-05 13:28:20.230513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:58.121 [2024-12-05 13:28:20.232186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:58.121 NVMe io qpair process completion error
00:23:58.121 [2024-12-05 13:28:20.233271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:58.121 [2024-12-05 13:28:20.234247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.122 [2024-12-05 13:28:20.235179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:58.122 [2024-12-05 13:28:20.237937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:58.122 NVMe io qpair process completion error
00:23:58.123 [2024-12-05 13:28:20.239224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:58.123 [2024-12-05 13:28:20.240048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.123 [2024-12-05 13:28:20.240976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:58.124 [2024-12-05 13:28:20.242619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:58.124 NVMe io qpair process completion error
00:23:58.124 [2024-12-05 13:28:20.243728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:58.124 [2024-12-05 13:28:20.244542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:58.125 [2024-12-05 13:28:20.245468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:58.125 [2024-12-05 13:28:20.249413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.125 NVMe io qpair process completion error
00:23:58.125 Initializing NVMe Controllers
00:23:58.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:58.126 Controller IO queue size 128, less than required.
00:23:58.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:58.126 Controller IO queue size 128, less than required.
00:23:58.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:58.126 Controller IO queue size 128, less than required.
00:23:58.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:58.126 Controller IO queue size 128, less than required.
00:23:58.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:58.126 Controller IO queue size 128, less than required.
00:23:58.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:58.126 Controller IO queue size 128, less than required.
00:23:58.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:58.126 Controller IO queue size 128, less than required.
00:23:58.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:58.126 Controller IO queue size 128, less than required.
00:23:58.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:58.126 Controller IO queue size 128, less than required.
00:23:58.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:58.126 Controller IO queue size 128, less than required.
00:23:58.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
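For reference, the (sct=0, sc=8) status in the burst above decodes, per the NVMe specification, to status code type 0 (generic command status) and status code 0x8 (command aborted due to SQ deletion): the writes are aborted because the target side is deleting the submission queues while I/O is still in flight, and the accompanying -6 is -ENXIO from the TCP transport. A minimal sketch of how an SPDK completion callback can surface these fields (the callback name and message format are illustrative assumptions, not the test tool's actual code):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Illustrative completion callback: on error, print the status code type
     * (sct) and status code (sc) from the completion entry, mirroring the
     * "Write completed with error (sct=0, sc=8)" lines in this log. */
    static void
    write_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
            if (spdk_nvme_cpl_is_error(cpl)) {
                    /* During teardown, expect sct == SPDK_NVME_SCT_GENERIC (0)
                     * and sc == SPDK_NVME_SC_ABORTED_SQ_DELETION (0x8). */
                    printf("Write completed with error (sct=%d, sc=%d)\n",
                           cpl->status.sct, cpl->status.sc);
            }
    }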
00:23:58.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:58.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:58.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:58.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:58.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:58.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:58.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:58.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:58.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:58.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:58.126 Initialization complete. Launching workers.
00:23:58.126 ========================================================
00:23:58.126                                                                             Latency(us)
00:23:58.126 Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:23:58.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1908.82      82.02   67078.43     629.36  125393.88
00:23:58.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1885.84      81.03   67912.67     641.15  123463.96
00:23:58.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1883.92      80.95   68004.44     660.30  154384.40
00:23:58.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1902.86      81.76   67368.24     669.03  131338.63
00:23:58.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1874.34      80.54   68416.39     700.12  122820.77
00:23:58.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1888.82      81.16   67931.13     872.42  135934.45
00:23:58.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1879.66      80.77   68283.06     829.10  117714.19
00:23:58.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1897.33      81.53   66927.39     621.48  123251.72
00:23:58.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1859.02      79.88   68331.50     640.99  122992.15
00:23:58.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1885.84      81.03   67387.91     606.09  124333.51
00:23:58.126 ========================================================
00:23:58.126 Total                                                                :    18866.44     810.67   67761.04     606.09  154384.40
00:23:58.126
00:23:58.126 [2024-12-05 13:28:20.254157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94060 is same with the state(6) to be set
00:23:58.126 [2024-12-05 13:28:20.254203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b94390 is same with the state(6) to be set
00:23:58.126 [2024-12-05 13:28:20.254234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95050 is same with the state(6) to be set
00:23:58.126 [2024-12-05 13:28:20.254262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b95380 is same with the state(6) to be set
00:23:58.126 [2024-12-05 13:28:20.254290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b956b0 is same with the state(6) to be set
00:23:58.126 [2024-12-05 13:28:20.254318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1b946c0 is same with the state(6) to be set 00:23:58.126 [2024-12-05 13:28:20.254346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b959e0 is same with the state(6) to be set 00:23:58.126 [2024-12-05 13:28:20.254382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96540 is same with the state(6) to be set 00:23:58.126 [2024-12-05 13:28:20.254418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b949f0 is same with the state(6) to be set 00:23:58.126 [2024-12-05 13:28:20.254448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b96360 is same with the state(6) to be set 00:23:58.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:23:58.126 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1005055 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1005055 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1005055 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:59.067 13:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:59.067 rmmod nvme_tcp 00:23:59.067 rmmod nvme_fabrics 00:23:59.067 rmmod nvme_keyring 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1004776 ']' 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1004776 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1004776 ']' 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1004776 00:23:59.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1004776) - No such process 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1004776 is not found' 00:23:59.067 Process with pid 1004776 is not found 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.067 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.608 13:28:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:01.608 00:24:01.608 real 0m10.303s 00:24:01.608 user 0m27.877s 00:24:01.608 sys 0m4.110s 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:01.608 ************************************ 00:24:01.608 END TEST nvmf_shutdown_tc4 00:24:01.608 ************************************ 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:24:01.608 00:24:01.608 real 0m44.098s 00:24:01.608 user 1m46.292s 00:24:01.608 sys 0m14.361s 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:01.608 ************************************ 00:24:01.608 END TEST nvmf_shutdown 00:24:01.608 ************************************ 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:01.608 ************************************ 00:24:01.608 START TEST nvmf_nsid 00:24:01.608 ************************************ 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:01.608 * Looking for test storage... 
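The run_test helper invoked above is the harness wrapper that brackets every sub-test: an argument-count guard (the traced '[ 3 -le 1 ]'), a START banner, the test command run under timing (which produces the real/user/sys summaries printed before each END banner), and a closing banner. The function below is a minimal illustrative reconstruction from what is visible in this log, not SPDK's actual autotest_common.sh implementation:

# sketch only, reconstructed from the banners and timing in this trace
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"          # emits the real/user/sys block seen at each test's end
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}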
00:24:01.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:01.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.608 --rc genhtml_branch_coverage=1 00:24:01.608 --rc genhtml_function_coverage=1 00:24:01.608 --rc genhtml_legend=1 00:24:01.608 --rc geninfo_all_blocks=1 00:24:01.608 --rc geninfo_unexecuted_blocks=1 00:24:01.608 00:24:01.608 ' 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:01.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.608 --rc genhtml_branch_coverage=1 00:24:01.608 --rc genhtml_function_coverage=1 00:24:01.608 --rc genhtml_legend=1 00:24:01.608 --rc geninfo_all_blocks=1 00:24:01.608 --rc geninfo_unexecuted_blocks=1 00:24:01.608 00:24:01.608 ' 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:01.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.608 --rc genhtml_branch_coverage=1 00:24:01.608 --rc genhtml_function_coverage=1 00:24:01.608 --rc genhtml_legend=1 00:24:01.608 --rc geninfo_all_blocks=1 00:24:01.608 --rc geninfo_unexecuted_blocks=1 00:24:01.608 00:24:01.608 ' 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:01.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.608 --rc genhtml_branch_coverage=1 00:24:01.608 --rc genhtml_function_coverage=1 00:24:01.608 --rc genhtml_legend=1 00:24:01.608 --rc geninfo_all_blocks=1 00:24:01.608 --rc geninfo_unexecuted_blocks=1 00:24:01.608 00:24:01.608 ' 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.608 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:01.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:01.609 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:09.744 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:09.744 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
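The discovery trace above classifies candidate NICs by PCI vendor:device ID: 0x8086:0x159b (both ports of this host's Intel E810) matches the e810 list, 0x8086:0x37d2 would match x722, and the 0x15b3 entries cover the Mellanox parts; the entries that follow then map each matched function to its kernel net interface through /sys/bus/pci/devices/<bdf>/net. Below is a condensed sketch of that logic reading sysfs directly; the real script walks a pre-built pci_bus_cache, so treat this as an approximation:

# approximate reconstruction: classify PCI NICs by ID, then find their netdevs
for pci in /sys/bus/pci/devices/*; do
    vend=$(<"$pci/vendor"); dev=$(<"$pci/device")
    case "$vend:$dev" in
        0x8086:0x1592|0x8086:0x159b) kind=e810 ;;  # Intel E810, matched above
        0x8086:0x37d2)               kind=x722 ;;
        0x15b3:*)                    kind=mlx  ;;  # any ID from the mlx list
        *)                           continue ;;
    esac
    for net in "$pci"/net/*; do                    # e.g. cvl_0_0 / cvl_0_1
        [ -e "$net" ] && echo "Found ${pci##*/} ($vend - $dev): ${net##*/} [$kind]"
    done
done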
00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:09.744 Found net devices under 0000:31:00.0: cvl_0_0 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:09.744 Found net devices under 0000:31:00.1: cvl_0_1 00:24:09.744 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.745 13:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:09.745 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:10.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:24:10.006 00:24:10.006 --- 10.0.0.2 ping statistics --- 00:24:10.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.006 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:10.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:24:10.006 00:24:10.006 --- 10.0.0.1 ping statistics --- 00:24:10.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.006 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1010895 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1010895 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1010895 ']' 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:10.006 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:10.006 [2024-12-05 13:28:32.544034] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
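The network setup traced above builds the whole NVMe/TCP topology on one machine: the target-side port is moved into a private network namespace, the peer port stays in the root namespace as the initiator, TCP port 4420 is opened in iptables, and both directions are verified with ping. Condensed to the essential commands, all taken from the trace (the cvl_* interface names are specific to this rig):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # root namespace -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> root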
00:24:10.006 [2024-12-05 13:28:32.544102] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.266 [2024-12-05 13:28:32.634400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.266 [2024-12-05 13:28:32.675219] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.266 [2024-12-05 13:28:32.675258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.266 [2024-12-05 13:28:32.675266] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.266 [2024-12-05 13:28:32.675273] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.266 [2024-12-05 13:28:32.675278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.266 [2024-12-05 13:28:32.675886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1011219 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=283654b9-fcbc-4361-bb67-853ed718b3a2 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=908963d7-e26d-4af1-a5a0-d094618e152b 00:24:10.836 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:11.097 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=d4a35bae-1a24-45eb-92fd-1d3487893d9b 00:24:11.097 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:11.097 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.097 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:11.097 null0 00:24:11.097 null1 00:24:11.097 null2 00:24:11.097 [2024-12-05 13:28:33.436339] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:24:11.097 [2024-12-05 13:28:33.436392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011219 ] 00:24:11.097 [2024-12-05 13:28:33.438381] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.098 [2024-12-05 13:28:33.462558] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.098 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.098 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1011219 /var/tmp/tgt2.sock 00:24:11.098 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1011219 ']' 00:24:11.098 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:11.098 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:11.098 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:11.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
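The nsid test fixes its three namespace UUIDs up front (ns1uuid, ns2uuid, ns3uuid above) so that the NGUID read back through the host can be predicted: as the comparisons below assume, the target reports an NGUID equal to the namespace UUID with the dashes stripped. The trace does not show the RPCs behind the tgt2 setup, so the following is a hypothetical reconstruction using standard SPDK rpc.py methods; the exact flags nsid.sh uses may differ:

# hypothetical tgt2 setup sketch (method names are standard rpc.py commands)
rpc='scripts/rpc.py -s /var/tmp/tgt2.sock'
$rpc bdev_null_create null0 100 4096              # name, size in MB, block size
$rpc nvmf_create_transport -t tcp
$rpc nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
$rpc nvmf_subsystem_add_ns -u 283654b9-fcbc-4361-bb67-853ed718b3a2 \
    nqn.2024-10.io.spdk:cnode2 null0              # fixed UUID => predictable NGUID
$rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.1 -s 4421 nqn.2024-10.io.spdk:cnode2
# the check traced below then amounts to roughly:
[[ $(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid) == \
   $(tr -d - <<< 283654b9-fcbc-4361-bb67-853ed718b3a2) ]]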
00:24:11.098 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:11.098 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:11.098 [2024-12-05 13:28:33.533063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.098 [2024-12-05 13:28:33.568853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.357 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:11.357 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:11.357 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:11.617 [2024-12-05 13:28:34.056080] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.617 [2024-12-05 13:28:34.072217] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:11.617 nvme0n1 nvme0n2 00:24:11.617 nvme1n1 00:24:11.617 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:11.617 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:11.617 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:13.037 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:13.037 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:13.037 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:24:13.037 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:13.037 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:13.037 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:13.037 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:13.037 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:13.037 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:13.037 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:13.037 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:13.037 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:13.037 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:14.417 13:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 283654b9-fcbc-4361-bb67-853ed718b3a2 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=283654b9fcbc4361bb67853ed718b3a2 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 283654B9FCBC4361BB67853ED718B3A2 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 283654B9FCBC4361BB67853ED718B3A2 == \2\8\3\6\5\4\B\9\F\C\B\C\4\3\6\1\B\B\6\7\8\5\3\E\D\7\1\8\B\3\A\2 ]] 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 908963d7-e26d-4af1-a5a0-d094618e152b 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=908963d7e26d4af1a5a0d094618e152b 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 908963D7E26D4AF1A5A0D094618E152B 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 908963D7E26D4AF1A5A0D094618E152B == \9\0\8\9\6\3\D\7\E\2\6\D\4\A\F\1\A\5\A\0\D\0\9\4\6\1\8\E\1\5\2\B ]] 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:14.417 13:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid d4a35bae-1a24-45eb-92fd-1d3487893d9b 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d4a35bae1a2445eb92fd1d3487893d9b 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D4A35BAE1A2445EB92FD1D3487893D9B 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ D4A35BAE1A2445EB92FD1D3487893D9B == \D\4\A\3\5\B\A\E\1\A\2\4\4\5\E\B\9\2\F\D\1\D\3\4\8\7\8\9\3\D\9\B ]] 00:24:14.417 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:14.678 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:14.678 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:14.678 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1011219 00:24:14.678 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1011219 ']' 00:24:14.678 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1011219 00:24:14.678 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:14.678 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.678 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1011219 00:24:14.678 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:14.678 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:14.678 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1011219' 00:24:14.678 killing process with pid 1011219 00:24:14.678 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1011219 00:24:14.678 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1011219 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:14.939 rmmod nvme_tcp 00:24:14.939 rmmod nvme_fabrics 00:24:14.939 rmmod nvme_keyring 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1010895 ']' 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1010895 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1010895 ']' 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1010895 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1010895 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1010895' 00:24:14.939 killing process with pid 1010895 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1010895 00:24:14.939 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1010895 00:24:15.200 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:15.200 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:15.200 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:15.200 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:15.200 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:15.200 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:15.200 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:15.200 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:15.200 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:15.200 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.200 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.200 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.114 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:17.114 00:24:17.114 real 0m15.886s 00:24:17.114 user 
0m11.610s 00:24:17.114 sys 0m7.481s 00:24:17.114 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:17.114 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:17.114 ************************************ 00:24:17.114 END TEST nvmf_nsid 00:24:17.114 ************************************ 00:24:17.114 13:28:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:17.114 00:24:17.114 real 13m29.472s 00:24:17.114 user 27m41.723s 00:24:17.114 sys 4m10.345s 00:24:17.114 13:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:17.114 13:28:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:17.114 ************************************ 00:24:17.114 END TEST nvmf_target_extra 00:24:17.114 ************************************ 00:24:17.374 13:28:39 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:17.374 13:28:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:17.374 13:28:39 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:17.374 13:28:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:17.374 ************************************ 00:24:17.374 START TEST nvmf_host 00:24:17.374 ************************************ 00:24:17.374 13:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:17.374 * Looking for test storage... 00:24:17.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:17.374 13:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:17.375 13:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:17.375 13:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:17.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.636 --rc genhtml_branch_coverage=1 00:24:17.636 --rc genhtml_function_coverage=1 00:24:17.636 --rc genhtml_legend=1 00:24:17.636 --rc geninfo_all_blocks=1 00:24:17.636 --rc geninfo_unexecuted_blocks=1 00:24:17.636 00:24:17.636 ' 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:17.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.636 --rc genhtml_branch_coverage=1 00:24:17.636 --rc genhtml_function_coverage=1 00:24:17.636 --rc genhtml_legend=1 00:24:17.636 --rc geninfo_all_blocks=1 00:24:17.636 --rc geninfo_unexecuted_blocks=1 00:24:17.636 00:24:17.636 ' 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:17.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.636 --rc genhtml_branch_coverage=1 00:24:17.636 --rc genhtml_function_coverage=1 00:24:17.636 --rc genhtml_legend=1 00:24:17.636 --rc geninfo_all_blocks=1 00:24:17.636 --rc geninfo_unexecuted_blocks=1 00:24:17.636 00:24:17.636 ' 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:17.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.636 --rc genhtml_branch_coverage=1 00:24:17.636 --rc genhtml_function_coverage=1 00:24:17.636 --rc genhtml_legend=1 00:24:17.636 --rc geninfo_all_blocks=1 00:24:17.636 --rc geninfo_unexecuted_blocks=1 00:24:17.636 00:24:17.636 ' 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
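The nvmf_nsid test that finishes above ("END TEST nvmf_nsid") hinges on one identity: the NGUID the target reports for a namespace created from a given UUID is that UUID with the dashes stripped, compared case-insensitively (the trace uppercases both sides before the == test). A minimal sketch of that check, assuming bash 4+, nvme-cli, and jq; the helper names match the traced script, but the bodies here are reconstructions, not the verbatim test code:

    # UUID -> NGUID as the test expects it: uppercase, dashes removed.
    uuid2nguid() {
        local uuid=${1^^}
        tr -d - <<< "$uuid"
    }

    # NGUID the kernel actually reports for controller $1, namespace $2.
    nvme_get_nguid() {
        local ctrlr=$1 nsid=$2
        nvme id-ns "/dev/${ctrlr}n${nsid}" -o json | jq -r .nguid
    }

    expected=$(uuid2nguid d4a35bae-1a24-45eb-92fd-1d3487893d9b)
    actual=$(nvme_get_nguid nvme0 3)
    [[ ${actual^^} == "$expected" ]] && echo "NGUID matches namespace UUID"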
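Immediately above (and again at the start of the multicontroller test below), scripts/common.sh gates coverage options on the installed lcov: `lt 1.15 2` splits both version strings on '.', '-' and ':' (the traced IFS=.-:) and compares them field by field. A sketch of that comparison under the same assumptions — fields are purely numeric, and a missing field counts as 0:

    # Succeed when dotted version $1 is strictly older than $2.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( i = 0; i < len; i++ )); do
            local a=${ver1[i]:-0} b=${ver2[i]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov predates 2.x: use the 1.x option spelling"

Note the fields compare as integers, not decimal fractions: 1.15 is older than 2 because 1 < 2 in the first field, but 1.15 would be newer than 1.2 because 15 > 2 in the second.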
00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:17.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:17.636 13:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:17.637 13:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:17.637 13:28:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.637 ************************************ 00:24:17.637 START TEST nvmf_multicontroller 00:24:17.637 ************************************ 00:24:17.637 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:17.637 * Looking for test storage... 
00:24:17.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:17.637 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:17.637 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:24:17.637 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:17.897 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:17.897 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:17.897 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:17.897 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:17.897 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:17.897 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:17.897 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:17.897 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:17.897 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:17.897 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:17.897 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:17.897 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:17.897 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:17.897 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:17.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.898 --rc genhtml_branch_coverage=1 00:24:17.898 --rc genhtml_function_coverage=1 00:24:17.898 --rc genhtml_legend=1 00:24:17.898 --rc geninfo_all_blocks=1 00:24:17.898 --rc geninfo_unexecuted_blocks=1 00:24:17.898 00:24:17.898 ' 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:17.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.898 --rc genhtml_branch_coverage=1 00:24:17.898 --rc genhtml_function_coverage=1 00:24:17.898 --rc genhtml_legend=1 00:24:17.898 --rc geninfo_all_blocks=1 00:24:17.898 --rc geninfo_unexecuted_blocks=1 00:24:17.898 00:24:17.898 ' 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:17.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.898 --rc genhtml_branch_coverage=1 00:24:17.898 --rc genhtml_function_coverage=1 00:24:17.898 --rc genhtml_legend=1 00:24:17.898 --rc geninfo_all_blocks=1 00:24:17.898 --rc geninfo_unexecuted_blocks=1 00:24:17.898 00:24:17.898 ' 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:17.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:17.898 --rc genhtml_branch_coverage=1 00:24:17.898 --rc genhtml_function_coverage=1 00:24:17.898 --rc genhtml_legend=1 00:24:17.898 --rc geninfo_all_blocks=1 00:24:17.898 --rc geninfo_unexecuted_blocks=1 00:24:17.898 00:24:17.898 ' 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:17.898 13:28:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:17.898 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:17.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:17.899 13:28:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:17.899 13:28:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:26.034 
13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:26.034 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:26.035 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:26.035 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.035 13:28:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:26.035 Found net devices under 0000:31:00.0: cvl_0_0 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:26.035 Found net devices under 0000:31:00.1: cvl_0_1 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
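The block above is gather_supported_nvmf_pci_devs resolving a table of supported Intel (0x8086) and Mellanox (0x15b3) device IDs to kernel net devices through sysfs; on this host it finds two E810 ports (0x159b) exposed as cvl_0_0 and cvl_0_1. A sketch of the sysfs walk it performs — function name is illustrative, vendor/device values are taken from the trace:

    # List the net devices backed by PCI functions matching vendor:device.
    find_net_devs() {
        local vendor=$1 device=$2 pci entry
        for pci in /sys/bus/pci/devices/*; do
            [[ $(< "$pci/vendor") == "$vendor" ]] || continue
            [[ $(< "$pci/device") == "$device" ]] || continue
            for entry in "$pci"/net/*; do
                [[ -e $entry ]] && echo "${entry##*/}"   # e.g. cvl_0_0
            done
        done
    }

    find_net_devs 0x8086 0x159b   # the two E810 ports found above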
00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:26.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:26.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:24:26.035 00:24:26.035 --- 10.0.0.2 ping statistics --- 00:24:26.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.035 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:26.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:24:26.035 00:24:26.035 --- 10.0.0.1 ping statistics --- 00:24:26.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.035 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1016708 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1016708 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1016708 ']' 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.035 13:28:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:26.296 [2024-12-05 13:28:48.636765] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
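The nvmf_tcp_init sequence traced above builds the two-endpoint topology the rest of the suite relies on: the target port (cvl_0_0) is moved into a private namespace, cvl_0_0_ns_spdk, and addressed as 10.0.0.2; the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1; an iptables ACCEPT rule opens port 4420; and a ping in each direction proves the path before nvmf_tgt is started inside the namespace. Condensed from the trace (interface names are this host's):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator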
00:24:26.296 [2024-12-05 13:28:48.636829] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.296 [2024-12-05 13:28:48.751593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:26.296 [2024-12-05 13:28:48.803743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.296 [2024-12-05 13:28:48.803802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.296 [2024-12-05 13:28:48.803811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.296 [2024-12-05 13:28:48.803819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.296 [2024-12-05 13:28:48.803825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.296 [2024-12-05 13:28:48.805979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.296 [2024-12-05 13:28:48.806279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:26.297 [2024-12-05 13:28:48.806280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.238 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.238 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:27.238 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:27.238 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.239 [2024-12-05 13:28:49.499678] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.239 Malloc0 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.239 [2024-12-05 13:28:49.563031] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.239 [2024-12-05 13:28:49.574951] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.239 Malloc1 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1017036 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1017036 /var/tmp/bdevperf.sock 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1017036 ']' 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:27.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
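bdevperf is launched with -z (wait for RPC) against its own socket, and waitforlisten above blocks until that socket answers before any attach RPCs are issued. A hedged sketch of the pattern — a plain polling loop, not the verbatim autotest helper, which also confirms readiness over the RPC itself:

    # Wait until process $1 has created UNIX socket $2, or give up.
    wait_for_rpc_sock() {
        local pid=$1 sock=$2 i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died during startup
            [[ -S $sock ]] && return 0               # socket exists: ready enough
            sleep 0.1
        done
        return 1                                     # timed out after ~10s
    }

    wait_for_rpc_sock "$bdevperf_pid" /var/tmp/bdevperf.sock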
00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.239 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.501 NVMe0n1 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.501 1 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.501 13:28:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.501 request: 00:24:27.501 { 00:24:27.501 "name": "NVMe0", 00:24:27.501 "trtype": "tcp", 00:24:27.501 "traddr": "10.0.0.2", 00:24:27.501 "adrfam": "ipv4", 00:24:27.501 "trsvcid": "4420", 00:24:27.501 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:27.501 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:27.501 "hostaddr": "10.0.0.1", 00:24:27.501 "prchk_reftag": false, 00:24:27.501 "prchk_guard": false, 00:24:27.501 "hdgst": false, 00:24:27.501 "ddgst": false, 00:24:27.501 "allow_unrecognized_csi": false, 00:24:27.501 "method": "bdev_nvme_attach_controller", 00:24:27.501 "req_id": 1 00:24:27.501 } 00:24:27.501 Got JSON-RPC error response 00:24:27.501 response: 00:24:27.501 { 00:24:27.501 "code": -114, 00:24:27.501 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:27.501 } 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.501 request: 00:24:27.501 { 00:24:27.501 "name": "NVMe0", 00:24:27.501 "trtype": "tcp", 00:24:27.501 "traddr": "10.0.0.2", 00:24:27.501 "adrfam": "ipv4", 00:24:27.501 "trsvcid": "4420", 00:24:27.501 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:27.501 "hostaddr": "10.0.0.1", 00:24:27.501 "prchk_reftag": false, 00:24:27.501 "prchk_guard": false, 00:24:27.501 "hdgst": false, 00:24:27.501 "ddgst": false, 00:24:27.501 "allow_unrecognized_csi": false, 00:24:27.501 "method": "bdev_nvme_attach_controller", 00:24:27.501 "req_id": 1 00:24:27.501 } 00:24:27.501 Got JSON-RPC error response 00:24:27.501 response: 00:24:27.501 { 00:24:27.501 "code": -114, 00:24:27.501 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:27.501 } 00:24:27.501 13:28:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.501 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.501 request: 00:24:27.501 { 00:24:27.501 "name": "NVMe0", 00:24:27.501 "trtype": "tcp", 00:24:27.501 "traddr": "10.0.0.2", 00:24:27.501 "adrfam": "ipv4", 00:24:27.501 "trsvcid": "4420", 00:24:27.501 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.501 "hostaddr": "10.0.0.1", 00:24:27.501 "prchk_reftag": false, 00:24:27.501 "prchk_guard": false, 00:24:27.501 "hdgst": false, 00:24:27.501 "ddgst": false, 00:24:27.501 "multipath": "disable", 00:24:27.501 "allow_unrecognized_csi": false, 00:24:27.501 "method": "bdev_nvme_attach_controller", 00:24:27.501 "req_id": 1 00:24:27.501 } 00:24:27.502 Got JSON-RPC error response 00:24:27.502 response: 00:24:27.502 { 00:24:27.502 "code": -114, 00:24:27.502 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:27.502 } 00:24:27.502 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:27.502 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:27.502 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:27.502 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:27.502 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:27.502 13:28:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:27.502 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:27.502 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:27.502 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:27.502 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.502 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:27.502 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:27.502 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:27.502 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.502 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.762 request: 00:24:27.762 { 00:24:27.762 "name": "NVMe0", 00:24:27.762 "trtype": "tcp", 00:24:27.762 "traddr": "10.0.0.2", 00:24:27.762 "adrfam": "ipv4", 00:24:27.762 "trsvcid": "4420", 00:24:27.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.762 "hostaddr": "10.0.0.1", 00:24:27.762 "prchk_reftag": false, 00:24:27.762 "prchk_guard": false, 00:24:27.762 "hdgst": false, 00:24:27.762 "ddgst": false, 00:24:27.762 "multipath": "failover", 00:24:27.762 "allow_unrecognized_csi": false, 00:24:27.762 "method": "bdev_nvme_attach_controller", 00:24:27.762 "req_id": 1 00:24:27.762 } 00:24:27.762 Got JSON-RPC error response 00:24:27.762 response: 00:24:27.762 { 00:24:27.762 "code": -114, 00:24:27.762 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:27.762 } 00:24:27.762 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:27.762 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:27.762 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:27.762 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:27.762 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:27.762 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:27.762 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.762 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.762 NVMe0n1 00:24:27.762 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
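The four rejected attaches above (@60, @65, @69, @74) all fail with JSON-RPC error -114: once the name NVMe0 is bound to a path, re-attaching it with a different host NQN, a different subsystem NQN, or with multipath disabled is refused, and -x failover is refused as well when it points at the portal that is already attached. The attach that does succeed at @79 adds a genuinely new portal (port 4421) of the same subsystem under the same controller name, i.e. a second path for the existing NVMe0n1 bdev. A condensed sketch of that distinction, using scripts/rpc.py as an assumed stand-in for the test's rpc_cmd wrapper on the same bdevperf socket:

    rpc="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/bdevperf.sock"   # assumed helper

    # Rejected with -114: same name, same portal, multipath mode failover.
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover \
        || echo "duplicate network path rejected, as in the run above"

    # Accepted: a new portal of the same subsystem becomes a second path
    # under the existing NVMe0 controller name.
    $rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1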
00:24:27.762 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:27.762 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.762 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:27.762 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.762 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:27.762 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.762 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:28.022 00:24:28.022 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.022 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:28.022 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:28.022 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.022 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:28.022 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.022 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:28.022 13:28:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:29.406 { 00:24:29.406 "results": [ 00:24:29.406 { 00:24:29.406 "job": "NVMe0n1", 00:24:29.406 "core_mask": "0x1", 00:24:29.406 "workload": "write", 00:24:29.406 "status": "finished", 00:24:29.406 "queue_depth": 128, 00:24:29.406 "io_size": 4096, 00:24:29.406 "runtime": 1.006304, 00:24:29.406 "iops": 22103.658536585364, 00:24:29.406 "mibps": 86.34241615853658, 00:24:29.406 "io_failed": 0, 00:24:29.406 "io_timeout": 0, 00:24:29.406 "avg_latency_us": 5773.677502734943, 00:24:29.406 "min_latency_us": 3017.3866666666668, 00:24:29.406 "max_latency_us": 11195.733333333334 00:24:29.406 } 00:24:29.406 ], 00:24:29.406 "core_count": 1 00:24:29.406 } 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1017036 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@954 -- # '[' -z 1017036 ']' 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1017036 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1017036 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1017036' 00:24:29.406 killing process with pid 1017036 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1017036 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1017036 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:29.406 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:29.406 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:29.407 [2024-12-05 13:28:49.697037] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:24:29.407 [2024-12-05 13:28:49.697097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1017036 ] 00:24:29.407 [2024-12-05 13:28:49.777880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.407 [2024-12-05 13:28:49.819031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.407 [2024-12-05 13:28:50.435314] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name b648a715-72c5-4c05-b08a-e1b9c2b19a11 already exists 00:24:29.407 [2024-12-05 13:28:50.435345] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:b648a715-72c5-4c05-b08a-e1b9c2b19a11 alias for bdev NVMe1n1 00:24:29.407 [2024-12-05 13:28:50.435354] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:29.407 Running I/O for 1 seconds... 00:24:29.407 22067.00 IOPS, 86.20 MiB/s 00:24:29.407 Latency(us) 00:24:29.407 [2024-12-05T12:28:51.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.407 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:29.407 NVMe0n1 : 1.01 22103.66 86.34 0.00 0.00 5773.68 3017.39 11195.73 00:24:29.407 [2024-12-05T12:28:51.975Z] =================================================================================================================== 00:24:29.407 [2024-12-05T12:28:51.975Z] Total : 22103.66 86.34 0.00 0.00 5773.68 3017.39 11195.73 00:24:29.407 Received shutdown signal, test time was about 1.000000 seconds 00:24:29.407 00:24:29.407 Latency(us) 00:24:29.407 [2024-12-05T12:28:51.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.407 [2024-12-05T12:28:51.975Z] =================================================================================================================== 00:24:29.407 [2024-12-05T12:28:51.975Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:29.407 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:29.407 rmmod nvme_tcp 00:24:29.407 rmmod nvme_fabrics 00:24:29.407 rmmod nvme_keyring 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:29.407 
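With the bdevperf run finished and try.txt replayed above, the script unwinds the setup: stop bdevperf, delete both subsystems over the target RPC socket, then let nvmftestfini unload the kernel transport modules (the next entries go on to kill the nvmf target itself, pid 1016708). Roughly, as a hedged sketch with the same assumed helper paths as before:

    # Stop the initiator-side bdevperf first so no I/O is left in flight.
    kill "$bdevperf_pid" && wait "$bdevperf_pid"

    # Drop both test subsystems from the target.
    "$SPDK_ROOT/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    "$SPDK_ROOT/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2

    # Mirror nvmftestfini: flush, then unload the NVMe/TCP stack, which
    # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above.
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics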
13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1016708 ']' 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1016708 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1016708 ']' 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1016708 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1016708 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1016708' 00:24:29.407 killing process with pid 1016708 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1016708 00:24:29.407 13:28:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1016708 00:24:29.668 13:28:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:29.668 13:28:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:29.668 13:28:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:29.668 13:28:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:29.668 13:28:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:29.668 13:28:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:29.668 13:28:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:29.668 13:28:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:29.668 13:28:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:29.668 13:28:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.668 13:28:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:29.668 13:28:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.216 13:28:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:32.217 00:24:32.217 real 0m14.156s 00:24:32.217 user 0m14.760s 00:24:32.217 sys 0m6.924s 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:32.217 ************************************ 00:24:32.217 END TEST nvmf_multicontroller 00:24:32.217 ************************************ 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.217 ************************************ 00:24:32.217 START TEST nvmf_aer 00:24:32.217 ************************************ 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:32.217 * Looking for test storage... 00:24:32.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:32.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.217 --rc genhtml_branch_coverage=1 00:24:32.217 --rc genhtml_function_coverage=1 00:24:32.217 --rc genhtml_legend=1 00:24:32.217 --rc geninfo_all_blocks=1 00:24:32.217 --rc geninfo_unexecuted_blocks=1 00:24:32.217 00:24:32.217 ' 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:32.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.217 --rc genhtml_branch_coverage=1 00:24:32.217 --rc genhtml_function_coverage=1 00:24:32.217 --rc genhtml_legend=1 00:24:32.217 --rc geninfo_all_blocks=1 00:24:32.217 --rc geninfo_unexecuted_blocks=1 00:24:32.217 00:24:32.217 ' 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:32.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.217 --rc genhtml_branch_coverage=1 00:24:32.217 --rc genhtml_function_coverage=1 00:24:32.217 --rc genhtml_legend=1 00:24:32.217 --rc geninfo_all_blocks=1 00:24:32.217 --rc geninfo_unexecuted_blocks=1 00:24:32.217 00:24:32.217 ' 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:32.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.217 --rc genhtml_branch_coverage=1 00:24:32.217 --rc genhtml_function_coverage=1 00:24:32.217 --rc genhtml_legend=1 00:24:32.217 --rc geninfo_all_blocks=1 00:24:32.217 --rc geninfo_unexecuted_blocks=1 00:24:32.217 00:24:32.217 ' 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.217 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:32.218 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:32.218 13:28:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.359 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:40.360 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:40.360 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:40.360 Found net devices under 0000:31:00.0: cvl_0_0 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:40.360 13:29:02 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:40.360 Found net devices under 0000:31:00.1: cvl_0_1 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:40.360 
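Before the nvmf_aer body runs, nvmf_tcp_init (traced above) builds the point-to-point test network: the target-side port cvl_0_0 moves into its own network namespace, both ends get 10.0.0.x/24 addresses, and an iptables rule opens the NVMe/TCP port on the initiator interface. Condensed from the commands in this run, under the assumption of the same interface names:

    ip netns add cvl_0_0_ns_spdk                        # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The two pings that follow (10.0.0.2 from the initiator side, 10.0.0.1 from inside the namespace) are the sanity check that both directions route before any NVMe/TCP traffic is attempted.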
13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:40.360 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:40.360 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.685 ms 00:24:40.360 00:24:40.360 --- 10.0.0.2 ping statistics --- 00:24:40.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.360 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:40.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:40.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:24:40.360 00:24:40.360 --- 10.0.0.1 ping statistics --- 00:24:40.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.360 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1022120 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1022120 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1022120 ']' 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.360 13:29:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:40.622 [2024-12-05 13:29:02.955453] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:24:40.622 [2024-12-05 13:29:02.955522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.622 [2024-12-05 13:29:03.047199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:40.622 [2024-12-05 13:29:03.089116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.622 [2024-12-05 13:29:03.089155] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.622 [2024-12-05 13:29:03.089163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.622 [2024-12-05 13:29:03.089169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.622 [2024-12-05 13:29:03.089176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:40.622 [2024-12-05 13:29:03.090919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.622 [2024-12-05 13:29:03.091195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.622 [2024-12-05 13:29:03.091353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.622 [2024-12-05 13:29:03.091353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:41.192 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.192 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:41.453 [2024-12-05 13:29:03.807262] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:41.453 Malloc0 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:41.453 [2024-12-05 13:29:03.875315] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.453 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:41.453 [ 00:24:41.453 { 00:24:41.453 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:41.453 "subtype": "Discovery", 00:24:41.453 "listen_addresses": [], 00:24:41.453 "allow_any_host": true, 00:24:41.453 "hosts": [] 00:24:41.453 }, 00:24:41.453 { 00:24:41.453 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.453 "subtype": "NVMe", 00:24:41.453 "listen_addresses": [ 00:24:41.453 { 00:24:41.453 "trtype": "TCP", 00:24:41.453 "adrfam": "IPv4", 00:24:41.453 "traddr": "10.0.0.2", 00:24:41.453 "trsvcid": "4420" 00:24:41.453 } 00:24:41.453 ], 00:24:41.453 "allow_any_host": true, 00:24:41.454 "hosts": [], 00:24:41.454 "serial_number": "SPDK00000000000001", 00:24:41.454 "model_number": "SPDK bdev Controller", 00:24:41.454 "max_namespaces": 2, 00:24:41.454 "min_cntlid": 1, 00:24:41.454 "max_cntlid": 65519, 00:24:41.454 "namespaces": [ 00:24:41.454 { 00:24:41.454 "nsid": 1, 00:24:41.454 "bdev_name": "Malloc0", 00:24:41.454 "name": "Malloc0", 00:24:41.454 "nguid": "D9E61089FC684DE5ACB705EEA199E3F0", 00:24:41.454 "uuid": "d9e61089-fc68-4de5-acb7-05eea199e3f0" 00:24:41.454 } 00:24:41.454 ] 00:24:41.454 } 00:24:41.454 ] 00:24:41.454 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.454 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:41.454 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:41.454 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1022417 00:24:41.454 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:41.454 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:41.454 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:41.454 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:41.454 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:41.454 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:41.454 13:29:03 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:41.454 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:41.454 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:41.454 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:41.454 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:41.715 Malloc1 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.715 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:41.715 Asynchronous Event Request test 00:24:41.715 Attaching to 10.0.0.2 00:24:41.715 Attached to 10.0.0.2 00:24:41.715 Registering asynchronous event callbacks... 00:24:41.715 Starting namespace attribute notice tests for all controllers... 00:24:41.715 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:41.715 aer_cb - Changed Namespace 00:24:41.715 Cleaning up... 
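For reference, the trace above brings up the first test (nvmf_aer): a TCP transport with an 8192-byte IO unit, a 64 MiB malloc ramdisk (Malloc0) exported as namespace 1 of nqn.2016-06.io.spdk:cnode1 (capped at two namespaces by -m 2), and a listener on 10.0.0.2:4420. The aer tool is started with -t /tmp/aer_touch_file, the script blocks on that file, and the Changed Namespace event is then provoked by attaching Malloc1 as namespace 2. A minimal sketch of the polling helper, reconstructed from the loop visible in the trace (the actual body in autotest_common.sh may differ in detail):

waitforfile() {
    # Reconstruction of the handshake seen above: poll every 0.1 s and
    # give up after 200 attempts (~20 s); succeed once the file exists.
    local file=$1 i=0
    while [ ! -e "$file" ] && [ "$i" -lt 200 ]; do
        i=$((i + 1))
        sleep 0.1
    done
    [ -e "$file" ]    # non-zero exit status if the aer tool never signalled
}

The subsystem dump that follows confirms the trigger took effect: Malloc1 now appears as nsid 2 alongside Malloc0.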
00:24:41.715 [ 00:24:41.715 { 00:24:41.715 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:41.715 "subtype": "Discovery", 00:24:41.715 "listen_addresses": [], 00:24:41.715 "allow_any_host": true, 00:24:41.715 "hosts": [] 00:24:41.715 }, 00:24:41.715 { 00:24:41.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.715 "subtype": "NVMe", 00:24:41.715 "listen_addresses": [ 00:24:41.715 { 00:24:41.715 "trtype": "TCP", 00:24:41.715 "adrfam": "IPv4", 00:24:41.715 "traddr": "10.0.0.2", 00:24:41.715 "trsvcid": "4420" 00:24:41.715 } 00:24:41.715 ], 00:24:41.715 "allow_any_host": true, 00:24:41.715 "hosts": [], 00:24:41.715 "serial_number": "SPDK00000000000001", 00:24:41.715 "model_number": "SPDK bdev Controller", 00:24:41.715 "max_namespaces": 2, 00:24:41.715 "min_cntlid": 1, 00:24:41.715 "max_cntlid": 65519, 00:24:41.715 "namespaces": [ 00:24:41.715 { 00:24:41.715 "nsid": 1, 00:24:41.715 "bdev_name": "Malloc0", 00:24:41.715 "name": "Malloc0", 00:24:41.715 "nguid": "D9E61089FC684DE5ACB705EEA199E3F0", 00:24:41.715 "uuid": "d9e61089-fc68-4de5-acb7-05eea199e3f0" 00:24:41.715 }, 00:24:41.715 { 00:24:41.715 "nsid": 2, 00:24:41.715 "bdev_name": "Malloc1", 00:24:41.715 "name": "Malloc1", 00:24:41.715 "nguid": "CBE55462A1D94D6681DAC717A110A58B", 00:24:41.976 "uuid": "cbe55462-a1d9-4d66-81da-c717a110a58b" 00:24:41.976 } 00:24:41.976 ] 00:24:41.976 } 00:24:41.976 ] 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1022417 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:41.976 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:41.976 rmmod 
nvme_tcp 00:24:41.976 rmmod nvme_fabrics 00:24:41.977 rmmod nvme_keyring 00:24:41.977 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:41.977 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:41.977 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:41.977 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1022120 ']' 00:24:41.977 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1022120 00:24:41.977 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1022120 ']' 00:24:41.977 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1022120 00:24:41.977 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:41.977 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.977 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1022120 00:24:41.977 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:41.977 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:41.977 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1022120' 00:24:41.977 killing process with pid 1022120 00:24:41.977 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1022120 00:24:41.977 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1022120 00:24:42.238 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:42.239 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:42.239 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:42.239 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:42.239 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:42.239 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:42.239 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:42.239 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:42.239 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:42.239 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.239 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.239 13:29:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.164 13:29:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:44.164 00:24:44.164 real 0m12.389s 00:24:44.164 user 0m8.542s 00:24:44.164 sys 0m6.708s 00:24:44.164 13:29:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:44.164 13:29:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:44.164 ************************************ 00:24:44.164 END TEST nvmf_aer 00:24:44.164 ************************************ 00:24:44.164 13:29:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:44.164 13:29:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:44.164 13:29:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:44.164 13:29:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.519 ************************************ 00:24:44.519 START TEST nvmf_async_init 00:24:44.519 ************************************ 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:44.519 * Looking for test storage... 00:24:44.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:44.519 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:44.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.520 --rc genhtml_branch_coverage=1 00:24:44.520 --rc genhtml_function_coverage=1 00:24:44.520 --rc genhtml_legend=1 00:24:44.520 --rc geninfo_all_blocks=1 00:24:44.520 --rc geninfo_unexecuted_blocks=1 00:24:44.520 00:24:44.520 ' 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:44.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.520 --rc genhtml_branch_coverage=1 00:24:44.520 --rc genhtml_function_coverage=1 00:24:44.520 --rc genhtml_legend=1 00:24:44.520 --rc geninfo_all_blocks=1 00:24:44.520 --rc geninfo_unexecuted_blocks=1 00:24:44.520 00:24:44.520 ' 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:44.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.520 --rc genhtml_branch_coverage=1 00:24:44.520 --rc genhtml_function_coverage=1 00:24:44.520 --rc genhtml_legend=1 00:24:44.520 --rc geninfo_all_blocks=1 00:24:44.520 --rc geninfo_unexecuted_blocks=1 00:24:44.520 00:24:44.520 ' 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:44.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.520 --rc genhtml_branch_coverage=1 00:24:44.520 --rc genhtml_function_coverage=1 00:24:44.520 --rc genhtml_legend=1 00:24:44.520 --rc geninfo_all_blocks=1 00:24:44.520 --rc geninfo_unexecuted_blocks=1 00:24:44.520 00:24:44.520 ' 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.520 13:29:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:44.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:44.520 13:29:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=5e188966f08640888a1a02fa30b39205 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:44.520 13:29:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:52.756 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:52.756 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:52.756 Found net devices under 0000:31:00.0: cvl_0_0 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:52.756 Found net devices under 0000:31:00.1: cvl_0_1 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.756 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.757 13:29:15 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:52.757 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.757 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.757 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:52.757 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:52.757 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.757 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.757 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:52.757 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:52.757 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.757 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:53.017 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:53.017 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:53.017 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:53.018 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:53.018 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:53.018 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:53.018 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:53.018 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:53.018 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:53.018 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.552 ms 00:24:53.018 00:24:53.018 --- 10.0.0.2 ping statistics --- 00:24:53.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.018 rtt min/avg/max/mdev = 0.552/0.552/0.552/0.000 ms 00:24:53.018 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:53.018 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:53.018 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:24:53.018 00:24:53.018 --- 10.0.0.1 ping statistics --- 00:24:53.018 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:53.018 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:24:53.018 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:53.018 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:53.018 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:53.018 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:53.018 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:53.018 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:53.018 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:53.018 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:53.018 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:53.278 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:53.278 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:53.278 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:53.278 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:53.278 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1027407 00:24:53.278 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1027407 00:24:53.278 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:53.278 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1027407 ']' 00:24:53.278 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.278 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:53.278 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.278 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:53.278 13:29:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:53.278 [2024-12-05 13:29:15.673106] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
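For reference, the nvmf_async_init test reuses the namespace topology plumbed above: cvl_0_0 was moved into cvl_0_0_ns_spdk with 10.0.0.2 and the target runs inside that namespace, while the initiator keeps cvl_0_1 (10.0.0.1) in the root namespace, so host and target traffic crosses the real NIC pair. Condensed from the nvmfappstart call in the trace, the launch pattern is roughly (a sketch, not a drop-in script):

# Start the target inside the test namespace and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!                  # 1027407 in this run
waitforlisten "$nvmfpid"    # in-tree helper; blocks until /var/tmp/spdk.sock answers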
00:24:53.278 [2024-12-05 13:29:15.673173] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.278 [2024-12-05 13:29:15.761330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.278 [2024-12-05 13:29:15.796005] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.278 [2024-12-05 13:29:15.796039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:53.278 [2024-12-05 13:29:15.796047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.278 [2024-12-05 13:29:15.796054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:53.278 [2024-12-05 13:29:15.796059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:53.278 [2024-12-05 13:29:15.796636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.218 [2024-12-05 13:29:16.517876] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.218 null0 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 5e188966f08640888a1a02fa30b39205 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.218 [2024-12-05 13:29:16.578163] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.218 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.478 nvme0n1 00:24:54.478 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.478 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:54.478 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.478 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.478 [ 00:24:54.478 { 00:24:54.478 "name": "nvme0n1", 00:24:54.478 "aliases": [ 00:24:54.478 "5e188966-f086-4088-8a1a-02fa30b39205" 00:24:54.478 ], 00:24:54.478 "product_name": "NVMe disk", 00:24:54.478 "block_size": 512, 00:24:54.478 "num_blocks": 2097152, 00:24:54.478 "uuid": "5e188966-f086-4088-8a1a-02fa30b39205", 00:24:54.478 "numa_id": 0, 00:24:54.478 "assigned_rate_limits": { 00:24:54.478 "rw_ios_per_sec": 0, 00:24:54.478 "rw_mbytes_per_sec": 0, 00:24:54.478 "r_mbytes_per_sec": 0, 00:24:54.478 "w_mbytes_per_sec": 0 00:24:54.478 }, 00:24:54.478 "claimed": false, 00:24:54.478 "zoned": false, 00:24:54.478 "supported_io_types": { 00:24:54.478 "read": true, 00:24:54.478 "write": true, 00:24:54.478 "unmap": false, 00:24:54.478 "flush": true, 00:24:54.478 "reset": true, 00:24:54.478 "nvme_admin": true, 00:24:54.478 "nvme_io": true, 00:24:54.478 "nvme_io_md": false, 00:24:54.478 "write_zeroes": true, 00:24:54.478 "zcopy": false, 00:24:54.478 "get_zone_info": false, 00:24:54.478 "zone_management": false, 00:24:54.478 "zone_append": false, 00:24:54.478 "compare": true, 00:24:54.478 "compare_and_write": true, 00:24:54.478 "abort": true, 00:24:54.478 "seek_hole": false, 00:24:54.478 "seek_data": false, 00:24:54.478 "copy": true, 00:24:54.478 "nvme_iov_md": false 00:24:54.478 }, 00:24:54.478 
"memory_domains": [ 00:24:54.478 { 00:24:54.478 "dma_device_id": "system", 00:24:54.478 "dma_device_type": 1 00:24:54.478 } 00:24:54.478 ], 00:24:54.478 "driver_specific": { 00:24:54.478 "nvme": [ 00:24:54.478 { 00:24:54.478 "trid": { 00:24:54.478 "trtype": "TCP", 00:24:54.478 "adrfam": "IPv4", 00:24:54.478 "traddr": "10.0.0.2", 00:24:54.478 "trsvcid": "4420", 00:24:54.478 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:54.478 }, 00:24:54.478 "ctrlr_data": { 00:24:54.478 "cntlid": 1, 00:24:54.478 "vendor_id": "0x8086", 00:24:54.478 "model_number": "SPDK bdev Controller", 00:24:54.478 "serial_number": "00000000000000000000", 00:24:54.478 "firmware_revision": "25.01", 00:24:54.478 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:54.478 "oacs": { 00:24:54.478 "security": 0, 00:24:54.478 "format": 0, 00:24:54.478 "firmware": 0, 00:24:54.478 "ns_manage": 0 00:24:54.478 }, 00:24:54.478 "multi_ctrlr": true, 00:24:54.478 "ana_reporting": false 00:24:54.478 }, 00:24:54.478 "vs": { 00:24:54.478 "nvme_version": "1.3" 00:24:54.478 }, 00:24:54.478 "ns_data": { 00:24:54.478 "id": 1, 00:24:54.478 "can_share": true 00:24:54.478 } 00:24:54.478 } 00:24:54.478 ], 00:24:54.478 "mp_policy": "active_passive" 00:24:54.478 } 00:24:54.478 } 00:24:54.478 ] 00:24:54.478 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.478 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:54.478 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.478 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.478 [2024-12-05 13:29:16.852354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:54.478 [2024-12-05 13:29:16.852422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e85040 (9): Bad file descriptor 00:24:54.478 [2024-12-05 13:29:16.983956] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:54.478 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.478 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:54.478 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.478 13:29:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.478 [ 00:24:54.478 { 00:24:54.478 "name": "nvme0n1", 00:24:54.478 "aliases": [ 00:24:54.478 "5e188966-f086-4088-8a1a-02fa30b39205" 00:24:54.478 ], 00:24:54.478 "product_name": "NVMe disk", 00:24:54.478 "block_size": 512, 00:24:54.478 "num_blocks": 2097152, 00:24:54.478 "uuid": "5e188966-f086-4088-8a1a-02fa30b39205", 00:24:54.478 "numa_id": 0, 00:24:54.478 "assigned_rate_limits": { 00:24:54.478 "rw_ios_per_sec": 0, 00:24:54.478 "rw_mbytes_per_sec": 0, 00:24:54.478 "r_mbytes_per_sec": 0, 00:24:54.478 "w_mbytes_per_sec": 0 00:24:54.478 }, 00:24:54.478 "claimed": false, 00:24:54.478 "zoned": false, 00:24:54.478 "supported_io_types": { 00:24:54.478 "read": true, 00:24:54.478 "write": true, 00:24:54.478 "unmap": false, 00:24:54.478 "flush": true, 00:24:54.478 "reset": true, 00:24:54.478 "nvme_admin": true, 00:24:54.478 "nvme_io": true, 00:24:54.478 "nvme_io_md": false, 00:24:54.478 "write_zeroes": true, 00:24:54.478 "zcopy": false, 00:24:54.478 "get_zone_info": false, 00:24:54.478 "zone_management": false, 00:24:54.478 "zone_append": false, 00:24:54.478 "compare": true, 00:24:54.478 "compare_and_write": true, 00:24:54.478 "abort": true, 00:24:54.478 "seek_hole": false, 00:24:54.478 "seek_data": false, 00:24:54.478 "copy": true, 00:24:54.478 "nvme_iov_md": false 00:24:54.478 }, 00:24:54.478 "memory_domains": [ 00:24:54.478 { 00:24:54.478 "dma_device_id": "system", 00:24:54.478 "dma_device_type": 1 00:24:54.478 } 00:24:54.478 ], 00:24:54.478 "driver_specific": { 00:24:54.478 "nvme": [ 00:24:54.478 { 00:24:54.478 "trid": { 00:24:54.478 "trtype": "TCP", 00:24:54.478 "adrfam": "IPv4", 00:24:54.478 "traddr": "10.0.0.2", 00:24:54.478 "trsvcid": "4420", 00:24:54.478 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:54.478 }, 00:24:54.478 "ctrlr_data": { 00:24:54.478 "cntlid": 2, 00:24:54.478 "vendor_id": "0x8086", 00:24:54.478 "model_number": "SPDK bdev Controller", 00:24:54.478 "serial_number": "00000000000000000000", 00:24:54.478 "firmware_revision": "25.01", 00:24:54.478 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:54.478 "oacs": { 00:24:54.478 "security": 0, 00:24:54.478 "format": 0, 00:24:54.478 "firmware": 0, 00:24:54.478 "ns_manage": 0 00:24:54.478 }, 00:24:54.478 "multi_ctrlr": true, 00:24:54.478 "ana_reporting": false 00:24:54.478 }, 00:24:54.478 "vs": { 00:24:54.478 "nvme_version": "1.3" 00:24:54.478 }, 00:24:54.478 "ns_data": { 00:24:54.478 "id": 1, 00:24:54.478 "can_share": true 00:24:54.478 } 00:24:54.478 } 00:24:54.478 ], 00:24:54.478 "mp_policy": "active_passive" 00:24:54.478 } 00:24:54.478 } 00:24:54.478 ] 00:24:54.478 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.478 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.478 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.478 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.478 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
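The remainder of the test exercises TLS, which both tcp.c and bdev_nvme_rpc.c flag as experimental below. The string written next is an NVMe-oF configured PSK interchange key (NVMeTLSkey-1:01:...); condensed, the provisioning steps that follow in the trace are:

# Condensed from the trace below; the temp path is whatever mktemp returns
# (/tmp/tmp.ojPh9hpWZd in this run).
key_path=$(mktemp)
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
chmod 0600 "$key_path"                          # keep the PSK owner-readable only
rpc_cmd keyring_file_add_key key0 "$key_path"   # register it under the name key0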
00:24:54.478 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:54.478 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ojPh9hpWZd 00:24:54.478 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:54.478 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ojPh9hpWZd 00:24:54.478 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.ojPh9hpWZd 00:24:54.478 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.478 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.739 [2024-12-05 13:29:17.073032] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:54.739 [2024-12-05 13:29:17.073140] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.739 [2024-12-05 13:29:17.097114] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:54.739 nvme0n1 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:54.739 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.740 [ 00:24:54.740 { 00:24:54.740 "name": "nvme0n1", 00:24:54.740 "aliases": [ 00:24:54.740 "5e188966-f086-4088-8a1a-02fa30b39205" 00:24:54.740 ], 00:24:54.740 "product_name": "NVMe disk", 00:24:54.740 "block_size": 512, 00:24:54.740 "num_blocks": 2097152, 00:24:54.740 "uuid": "5e188966-f086-4088-8a1a-02fa30b39205", 00:24:54.740 "numa_id": 0, 00:24:54.740 "assigned_rate_limits": { 00:24:54.740 "rw_ios_per_sec": 0, 00:24:54.740 "rw_mbytes_per_sec": 0, 00:24:54.740 "r_mbytes_per_sec": 0, 00:24:54.740 "w_mbytes_per_sec": 0 00:24:54.740 }, 00:24:54.740 "claimed": false, 00:24:54.740 "zoned": false, 00:24:54.740 "supported_io_types": { 00:24:54.740 "read": true, 00:24:54.740 "write": true, 00:24:54.740 "unmap": false, 00:24:54.740 "flush": true, 00:24:54.740 "reset": true, 00:24:54.740 "nvme_admin": true, 00:24:54.740 "nvme_io": true, 00:24:54.740 "nvme_io_md": false, 00:24:54.740 "write_zeroes": true, 00:24:54.740 "zcopy": false, 00:24:54.740 "get_zone_info": false, 00:24:54.740 "zone_management": false, 00:24:54.740 "zone_append": false, 00:24:54.740 "compare": true, 00:24:54.740 "compare_and_write": true, 00:24:54.740 "abort": true, 00:24:54.740 "seek_hole": false, 00:24:54.740 "seek_data": false, 00:24:54.740 "copy": true, 00:24:54.740 "nvme_iov_md": false 00:24:54.740 }, 00:24:54.740 "memory_domains": [ 00:24:54.740 { 00:24:54.740 "dma_device_id": "system", 00:24:54.740 "dma_device_type": 1 00:24:54.740 } 00:24:54.740 ], 00:24:54.740 "driver_specific": { 00:24:54.740 "nvme": [ 00:24:54.740 { 00:24:54.740 "trid": { 00:24:54.740 "trtype": "TCP", 00:24:54.740 "adrfam": "IPv4", 00:24:54.740 "traddr": "10.0.0.2", 00:24:54.740 "trsvcid": "4421", 00:24:54.740 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:54.740 }, 00:24:54.740 "ctrlr_data": { 00:24:54.740 "cntlid": 3, 00:24:54.740 "vendor_id": "0x8086", 00:24:54.740 "model_number": "SPDK bdev Controller", 00:24:54.740 "serial_number": "00000000000000000000", 00:24:54.740 "firmware_revision": "25.01", 00:24:54.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:54.740 "oacs": { 00:24:54.740 "security": 0, 00:24:54.740 "format": 0, 00:24:54.740 "firmware": 0, 00:24:54.740 "ns_manage": 0 00:24:54.740 }, 00:24:54.740 "multi_ctrlr": true, 00:24:54.740 "ana_reporting": false 00:24:54.740 }, 00:24:54.740 "vs": { 00:24:54.740 "nvme_version": "1.3" 00:24:54.740 }, 00:24:54.740 "ns_data": { 00:24:54.740 "id": 1, 00:24:54.740 "can_share": true 00:24:54.740 } 00:24:54.740 } 00:24:54.740 ], 00:24:54.740 "mp_policy": "active_passive" 00:24:54.740 } 00:24:54.740 } 00:24:54.740 ] 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.ojPh9hpWZd 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:54.740 rmmod nvme_tcp 00:24:54.740 rmmod nvme_fabrics 00:24:54.740 rmmod nvme_keyring 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1027407 ']' 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1027407 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1027407 ']' 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1027407 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.740 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1027407 00:24:54.999 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:54.999 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:55.000 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1027407' 00:24:55.000 killing process with pid 1027407 00:24:55.000 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1027407 00:24:55.000 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1027407 00:24:55.000 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:55.000 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:55.000 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:55.000 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:55.000 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:55.000 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:55.000 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:55.000 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:55.000 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:55.000 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:24:55.000 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:55.000 13:29:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:57.542 00:24:57.542 real 0m12.802s 00:24:57.542 user 0m4.427s 00:24:57.542 sys 0m6.933s 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:57.542 ************************************ 00:24:57.542 END TEST nvmf_async_init 00:24:57.542 ************************************ 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.542 ************************************ 00:24:57.542 START TEST dma 00:24:57.542 ************************************ 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:57.542 * Looking for test storage... 00:24:57.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:57.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.542 --rc genhtml_branch_coverage=1 00:24:57.542 --rc genhtml_function_coverage=1 00:24:57.542 --rc genhtml_legend=1 00:24:57.542 --rc geninfo_all_blocks=1 00:24:57.542 --rc geninfo_unexecuted_blocks=1 00:24:57.542 00:24:57.542 ' 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:57.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.542 --rc genhtml_branch_coverage=1 00:24:57.542 --rc genhtml_function_coverage=1 00:24:57.542 --rc genhtml_legend=1 00:24:57.542 --rc geninfo_all_blocks=1 00:24:57.542 --rc geninfo_unexecuted_blocks=1 00:24:57.542 00:24:57.542 ' 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:57.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.542 --rc genhtml_branch_coverage=1 00:24:57.542 --rc genhtml_function_coverage=1 00:24:57.542 --rc genhtml_legend=1 00:24:57.542 --rc geninfo_all_blocks=1 00:24:57.542 --rc geninfo_unexecuted_blocks=1 00:24:57.542 00:24:57.542 ' 00:24:57.542 13:29:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:57.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.542 --rc genhtml_branch_coverage=1 00:24:57.542 --rc genhtml_function_coverage=1 00:24:57.543 --rc genhtml_legend=1 00:24:57.543 --rc geninfo_all_blocks=1 00:24:57.543 --rc geninfo_unexecuted_blocks=1 00:24:57.543 00:24:57.543 ' 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.543 
13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:57.543 00:24:57.543 real 0m0.237s 00:24:57.543 user 0m0.142s 00:24:57.543 sys 0m0.111s 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:57.543 ************************************ 00:24:57.543 END TEST dma 00:24:57.543 ************************************ 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.543 ************************************ 00:24:57.543 START TEST nvmf_identify 00:24:57.543 
************************************ 00:24:57.543 13:29:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:57.543 * Looking for test storage... 00:24:57.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.543 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:57.543 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:24:57.543 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:57.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.805 --rc genhtml_branch_coverage=1 00:24:57.805 --rc genhtml_function_coverage=1 00:24:57.805 --rc genhtml_legend=1 00:24:57.805 --rc geninfo_all_blocks=1 00:24:57.805 --rc geninfo_unexecuted_blocks=1 00:24:57.805 00:24:57.805 ' 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:57.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.805 --rc genhtml_branch_coverage=1 00:24:57.805 --rc genhtml_function_coverage=1 00:24:57.805 --rc genhtml_legend=1 00:24:57.805 --rc geninfo_all_blocks=1 00:24:57.805 --rc geninfo_unexecuted_blocks=1 00:24:57.805 00:24:57.805 ' 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:57.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.805 --rc genhtml_branch_coverage=1 00:24:57.805 --rc genhtml_function_coverage=1 00:24:57.805 --rc genhtml_legend=1 00:24:57.805 --rc geninfo_all_blocks=1 00:24:57.805 --rc geninfo_unexecuted_blocks=1 00:24:57.805 00:24:57.805 ' 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:57.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.805 --rc genhtml_branch_coverage=1 00:24:57.805 --rc genhtml_function_coverage=1 00:24:57.805 --rc genhtml_legend=1 00:24:57.805 --rc geninfo_all_blocks=1 00:24:57.805 --rc geninfo_unexecuted_blocks=1 00:24:57.805 00:24:57.805 ' 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.805 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:57.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:57.806 13:29:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:05.947 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:05.947 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:05.947 Found net devices under 0000:31:00.0: cvl_0_0 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:05.947 Found net devices under 0000:31:00.1: cvl_0_1 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:25:05.947 00:25:05.947 --- 10.0.0.2 ping statistics --- 00:25:05.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.947 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:25:05.947 00:25:05.947 --- 10.0.0.1 ping statistics --- 00:25:05.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.947 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.947 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:25:05.948 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:05.948 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.948 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:05.948 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:05.948 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.948 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:05.948 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:06.209 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:06.209 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:06.209 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:06.209 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1032523 00:25:06.209 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:06.210 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:06.210 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1032523 00:25:06.210 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1032523 ']' 00:25:06.210 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.210 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.210 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.210 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.210 13:29:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:06.210 [2024-12-05 13:29:28.585585] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:25:06.210 [2024-12-05 13:29:28.585647] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.210 [2024-12-05 13:29:28.675672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:06.210 [2024-12-05 13:29:28.718476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.210 [2024-12-05 13:29:28.718514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.210 [2024-12-05 13:29:28.718522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.210 [2024-12-05 13:29:28.718532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.210 [2024-12-05 13:29:28.718538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.210 [2024-12-05 13:29:28.720393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.210 [2024-12-05 13:29:28.720508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:06.210 [2024-12-05 13:29:28.720666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.210 [2024-12-05 13:29:28.720666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.152 [2024-12-05 13:29:29.400885] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.152 Malloc0 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.152 [2024-12-05 13:29:29.512329] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.152 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.152 [ 00:25:07.152 { 00:25:07.152 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:07.152 "subtype": "Discovery", 00:25:07.152 "listen_addresses": [ 00:25:07.152 { 00:25:07.152 "trtype": "TCP", 00:25:07.152 "adrfam": "IPv4", 00:25:07.152 "traddr": "10.0.0.2", 00:25:07.152 "trsvcid": "4420" 00:25:07.152 } 00:25:07.152 ], 00:25:07.152 "allow_any_host": true, 00:25:07.152 "hosts": [] 00:25:07.152 }, 00:25:07.152 { 00:25:07.153 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.153 "subtype": "NVMe", 00:25:07.153 "listen_addresses": [ 00:25:07.153 { 00:25:07.153 "trtype": "TCP", 00:25:07.153 "adrfam": "IPv4", 00:25:07.153 "traddr": "10.0.0.2", 00:25:07.153 "trsvcid": "4420" 00:25:07.153 } 00:25:07.153 ], 00:25:07.153 "allow_any_host": true, 00:25:07.153 "hosts": [], 00:25:07.153 "serial_number": "SPDK00000000000001", 00:25:07.153 "model_number": "SPDK bdev Controller", 00:25:07.153 "max_namespaces": 32, 00:25:07.153 "min_cntlid": 1, 00:25:07.153 "max_cntlid": 65519, 00:25:07.153 "namespaces": [ 00:25:07.153 { 00:25:07.153 "nsid": 1, 00:25:07.153 "bdev_name": "Malloc0", 00:25:07.153 "name": "Malloc0", 00:25:07.153 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:07.153 "eui64": "ABCDEF0123456789", 00:25:07.153 "uuid": "3d429515-c739-4b68-9c4e-2222ffbb77fc" 00:25:07.153 } 00:25:07.153 ] 00:25:07.153 } 00:25:07.153 ] 00:25:07.153 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.153 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:07.153 [2024-12-05 13:29:29.576286] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:25:07.153 [2024-12-05 13:29:29.576336] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032658 ] 00:25:07.153 [2024-12-05 13:29:29.631091] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:07.153 [2024-12-05 13:29:29.631140] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:07.153 [2024-12-05 13:29:29.631145] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:07.153 [2024-12-05 13:29:29.631162] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:07.153 [2024-12-05 13:29:29.631170] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:07.153 [2024-12-05 13:29:29.635152] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:07.153 [2024-12-05 13:29:29.635187] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a33550 0 00:25:07.153 [2024-12-05 13:29:29.642875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:07.153 [2024-12-05 13:29:29.642888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:07.153 [2024-12-05 13:29:29.642893] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:07.153 [2024-12-05 13:29:29.642896] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:07.153 [2024-12-05 13:29:29.642929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.642935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.642940] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a33550) 00:25:07.153 [2024-12-05 13:29:29.642954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:07.153 [2024-12-05 13:29:29.642972] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95100, cid 0, qid 0 00:25:07.153 [2024-12-05 13:29:29.650875] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.153 [2024-12-05 13:29:29.650885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.153 [2024-12-05 13:29:29.650889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.650894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95100) on tqpair=0x1a33550 00:25:07.153 [2024-12-05 13:29:29.650905] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:07.153 [2024-12-05 13:29:29.650912] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:07.153 [2024-12-05 13:29:29.650921] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:07.153 [2024-12-05 13:29:29.650937] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.650941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.650944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a33550) 00:25:07.153 [2024-12-05 13:29:29.650952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.153 [2024-12-05 13:29:29.650966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95100, cid 0, qid 0 00:25:07.153 [2024-12-05 13:29:29.651135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.153 [2024-12-05 13:29:29.651143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.153 [2024-12-05 13:29:29.651146] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.651150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95100) on tqpair=0x1a33550 00:25:07.153 [2024-12-05 13:29:29.651158] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:07.153 [2024-12-05 13:29:29.651166] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:07.153 [2024-12-05 13:29:29.651173] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.651177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.651180] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a33550) 00:25:07.153 [2024-12-05 13:29:29.651187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.153 [2024-12-05 13:29:29.651198] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95100, cid 0, qid 0 00:25:07.153 [2024-12-05 13:29:29.651401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.153 [2024-12-05 13:29:29.651407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.153 [2024-12-05 13:29:29.651411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.651415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95100) on tqpair=0x1a33550 00:25:07.153 [2024-12-05 13:29:29.651420] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:07.153 [2024-12-05 13:29:29.651428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:07.153 [2024-12-05 13:29:29.651435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.651439] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.651442] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a33550) 00:25:07.153 [2024-12-05 13:29:29.651449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.153 [2024-12-05 13:29:29.651459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95100, cid 0, qid 0 
00:25:07.153 [2024-12-05 13:29:29.651669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.153 [2024-12-05 13:29:29.651675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.153 [2024-12-05 13:29:29.651679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.651683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95100) on tqpair=0x1a33550 00:25:07.153 [2024-12-05 13:29:29.651688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:07.153 [2024-12-05 13:29:29.651700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.651704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.651708] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a33550) 00:25:07.153 [2024-12-05 13:29:29.651714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.153 [2024-12-05 13:29:29.651724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95100, cid 0, qid 0 00:25:07.153 [2024-12-05 13:29:29.651901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.153 [2024-12-05 13:29:29.651908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.153 [2024-12-05 13:29:29.651912] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.651915] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95100) on tqpair=0x1a33550 00:25:07.153 [2024-12-05 13:29:29.651920] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:07.153 [2024-12-05 13:29:29.651926] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:07.153 [2024-12-05 13:29:29.651933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:07.153 [2024-12-05 13:29:29.652042] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:07.153 [2024-12-05 13:29:29.652047] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:07.153 [2024-12-05 13:29:29.652055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.652059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.652063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a33550) 00:25:07.153 [2024-12-05 13:29:29.652070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.153 [2024-12-05 13:29:29.652080] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95100, cid 0, qid 0 00:25:07.153 [2024-12-05 13:29:29.652278] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.153 [2024-12-05 13:29:29.652284] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.153 [2024-12-05 13:29:29.652288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.652292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95100) on tqpair=0x1a33550 00:25:07.153 [2024-12-05 13:29:29.652297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:07.153 [2024-12-05 13:29:29.652306] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.652310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.153 [2024-12-05 13:29:29.652313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a33550) 00:25:07.153 [2024-12-05 13:29:29.652320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.154 [2024-12-05 13:29:29.652330] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95100, cid 0, qid 0 00:25:07.154 [2024-12-05 13:29:29.652546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.154 [2024-12-05 13:29:29.652552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.154 [2024-12-05 13:29:29.652555] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.652559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95100) on tqpair=0x1a33550 00:25:07.154 [2024-12-05 13:29:29.652568] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:07.154 [2024-12-05 13:29:29.652574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:07.154 [2024-12-05 13:29:29.652581] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:07.154 [2024-12-05 13:29:29.652594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:07.154 [2024-12-05 13:29:29.652603] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.652607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a33550) 00:25:07.154 [2024-12-05 13:29:29.652614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.154 [2024-12-05 13:29:29.652625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95100, cid 0, qid 0 00:25:07.154 [2024-12-05 13:29:29.652841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.154 [2024-12-05 13:29:29.652848] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.154 [2024-12-05 13:29:29.652851] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.652855] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a33550): datao=0, datal=4096, cccid=0 00:25:07.154 [2024-12-05 13:29:29.652860] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1a95100) on tqpair(0x1a33550): expected_datao=0, payload_size=4096 00:25:07.154 [2024-12-05 13:29:29.652871] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.652887] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.652892] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.697872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.154 [2024-12-05 13:29:29.697887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.154 [2024-12-05 13:29:29.697891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.697896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95100) on tqpair=0x1a33550 00:25:07.154 [2024-12-05 13:29:29.697905] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:07.154 [2024-12-05 13:29:29.697911] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:07.154 [2024-12-05 13:29:29.697916] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:07.154 [2024-12-05 13:29:29.697922] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:07.154 [2024-12-05 13:29:29.697927] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:25:07.154 [2024-12-05 13:29:29.697932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:07.154 [2024-12-05 13:29:29.697940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:07.154 [2024-12-05 13:29:29.697948] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.697953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.697958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a33550) 00:25:07.154 [2024-12-05 13:29:29.697967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:07.154 [2024-12-05 13:29:29.697985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95100, cid 0, qid 0 00:25:07.154 [2024-12-05 13:29:29.698187] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.154 [2024-12-05 13:29:29.698194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.154 [2024-12-05 13:29:29.698198] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.698202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95100) on tqpair=0x1a33550 00:25:07.154 [2024-12-05 13:29:29.698209] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.698213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.698217] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a33550) 00:25:07.154 
[2024-12-05 13:29:29.698223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.154 [2024-12-05 13:29:29.698230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.698234] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.698237] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a33550) 00:25:07.154 [2024-12-05 13:29:29.698243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.154 [2024-12-05 13:29:29.698249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.698253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.698257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a33550) 00:25:07.154 [2024-12-05 13:29:29.698263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.154 [2024-12-05 13:29:29.698269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.698272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.698276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a33550) 00:25:07.154 [2024-12-05 13:29:29.698282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.154 [2024-12-05 13:29:29.698287] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:07.154 [2024-12-05 13:29:29.698299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:07.154 [2024-12-05 13:29:29.698305] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.698309] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a33550) 00:25:07.154 [2024-12-05 13:29:29.698316] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.154 [2024-12-05 13:29:29.698328] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95100, cid 0, qid 0 00:25:07.154 [2024-12-05 13:29:29.698334] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95280, cid 1, qid 0 00:25:07.154 [2024-12-05 13:29:29.698338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95400, cid 2, qid 0 00:25:07.154 [2024-12-05 13:29:29.698343] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95580, cid 3, qid 0 00:25:07.154 [2024-12-05 13:29:29.698348] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95700, cid 4, qid 0 00:25:07.154 [2024-12-05 13:29:29.698584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.154 [2024-12-05 13:29:29.698590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.154 [2024-12-05 13:29:29.698594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:25:07.154 [2024-12-05 13:29:29.698600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95700) on tqpair=0x1a33550 00:25:07.154 [2024-12-05 13:29:29.698605] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:07.154 [2024-12-05 13:29:29.698610] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:25:07.154 [2024-12-05 13:29:29.698622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.698626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a33550) 00:25:07.154 [2024-12-05 13:29:29.698632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.154 [2024-12-05 13:29:29.698642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95700, cid 4, qid 0 00:25:07.154 [2024-12-05 13:29:29.698867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.154 [2024-12-05 13:29:29.698874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.154 [2024-12-05 13:29:29.698878] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.698882] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a33550): datao=0, datal=4096, cccid=4 00:25:07.154 [2024-12-05 13:29:29.698887] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a95700) on tqpair(0x1a33550): expected_datao=0, payload_size=4096 00:25:07.154 [2024-12-05 13:29:29.698891] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.698898] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.698902] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.699117] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.154 [2024-12-05 13:29:29.699123] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.154 [2024-12-05 13:29:29.699127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.699131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95700) on tqpair=0x1a33550 00:25:07.154 [2024-12-05 13:29:29.699143] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:07.154 [2024-12-05 13:29:29.699165] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.699170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a33550) 00:25:07.154 [2024-12-05 13:29:29.699176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.154 [2024-12-05 13:29:29.699184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.699187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.154 [2024-12-05 13:29:29.699191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a33550) 00:25:07.154 [2024-12-05 13:29:29.699197] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.154 [2024-12-05 13:29:29.699211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95700, cid 4, qid 0 00:25:07.155 [2024-12-05 13:29:29.699216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95880, cid 5, qid 0 00:25:07.155 [2024-12-05 13:29:29.699435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.155 [2024-12-05 13:29:29.699442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.155 [2024-12-05 13:29:29.699446] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.155 [2024-12-05 13:29:29.699449] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a33550): datao=0, datal=1024, cccid=4 00:25:07.155 [2024-12-05 13:29:29.699454] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a95700) on tqpair(0x1a33550): expected_datao=0, payload_size=1024 00:25:07.155 [2024-12-05 13:29:29.699460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.155 [2024-12-05 13:29:29.699467] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.155 [2024-12-05 13:29:29.699471] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.155 [2024-12-05 13:29:29.699477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.155 [2024-12-05 13:29:29.699483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.155 [2024-12-05 13:29:29.699486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.155 [2024-12-05 13:29:29.699490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95880) on tqpair=0x1a33550 00:25:07.423 [2024-12-05 13:29:29.740029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.423 [2024-12-05 13:29:29.740041] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.423 [2024-12-05 13:29:29.740045] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.423 [2024-12-05 13:29:29.740049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95700) on tqpair=0x1a33550 00:25:07.423 [2024-12-05 13:29:29.740061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.423 [2024-12-05 13:29:29.740065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a33550) 00:25:07.423 [2024-12-05 13:29:29.740072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.423 [2024-12-05 13:29:29.740087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95700, cid 4, qid 0 00:25:07.423 [2024-12-05 13:29:29.740346] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.423 [2024-12-05 13:29:29.740353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.423 [2024-12-05 13:29:29.740356] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.423 [2024-12-05 13:29:29.740360] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a33550): datao=0, datal=3072, cccid=4 00:25:07.423 [2024-12-05 13:29:29.740364] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a95700) on tqpair(0x1a33550): expected_datao=0, payload_size=3072 00:25:07.423 [2024-12-05 13:29:29.740369] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.423 [2024-12-05 13:29:29.743868] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.423 [2024-12-05 13:29:29.743875] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.423 [2024-12-05 13:29:29.743883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.423 [2024-12-05 13:29:29.743889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.423 [2024-12-05 13:29:29.743892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.423 [2024-12-05 13:29:29.743896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95700) on tqpair=0x1a33550 00:25:07.423 [2024-12-05 13:29:29.743906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.423 [2024-12-05 13:29:29.743909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a33550) 00:25:07.423 [2024-12-05 13:29:29.743916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.423 [2024-12-05 13:29:29.743931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95700, cid 4, qid 0 00:25:07.423 [2024-12-05 13:29:29.744119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.423 [2024-12-05 13:29:29.744126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.423 [2024-12-05 13:29:29.744129] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.423 [2024-12-05 13:29:29.744133] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a33550): datao=0, datal=8, cccid=4 00:25:07.423 [2024-12-05 13:29:29.744137] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a95700) on tqpair(0x1a33550): expected_datao=0, payload_size=8 00:25:07.423 [2024-12-05 13:29:29.744142] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.423 [2024-12-05 13:29:29.744151] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.423 [2024-12-05 13:29:29.744155] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.423 [2024-12-05 13:29:29.785092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.423 [2024-12-05 13:29:29.785102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.423 [2024-12-05 13:29:29.785105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.423 [2024-12-05 13:29:29.785109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95700) on tqpair=0x1a33550 00:25:07.423 ===================================================== 00:25:07.423 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:07.423 ===================================================== 00:25:07.423 Controller Capabilities/Features 00:25:07.423 ================================ 00:25:07.423 Vendor ID: 0000 00:25:07.423 Subsystem Vendor ID: 0000 00:25:07.423 Serial Number: .................... 00:25:07.423 Model Number: ........................................ 
00:25:07.423 Firmware Version: 25.01 00:25:07.423 Recommended Arb Burst: 0 00:25:07.424 IEEE OUI Identifier: 00 00 00 00:25:07.424 Multi-path I/O 00:25:07.424 May have multiple subsystem ports: No 00:25:07.424 May have multiple controllers: No 00:25:07.424 Associated with SR-IOV VF: No 00:25:07.424 Max Data Transfer Size: 131072 00:25:07.424 Max Number of Namespaces: 0 00:25:07.424 Max Number of I/O Queues: 1024 00:25:07.424 NVMe Specification Version (VS): 1.3 00:25:07.424 NVMe Specification Version (Identify): 1.3 00:25:07.424 Maximum Queue Entries: 128 00:25:07.424 Contiguous Queues Required: Yes 00:25:07.424 Arbitration Mechanisms Supported 00:25:07.424 Weighted Round Robin: Not Supported 00:25:07.424 Vendor Specific: Not Supported 00:25:07.424 Reset Timeout: 15000 ms 00:25:07.424 Doorbell Stride: 4 bytes 00:25:07.424 NVM Subsystem Reset: Not Supported 00:25:07.424 Command Sets Supported 00:25:07.424 NVM Command Set: Supported 00:25:07.424 Boot Partition: Not Supported 00:25:07.424 Memory Page Size Minimum: 4096 bytes 00:25:07.424 Memory Page Size Maximum: 4096 bytes 00:25:07.424 Persistent Memory Region: Not Supported 00:25:07.424 Optional Asynchronous Events Supported 00:25:07.424 Namespace Attribute Notices: Not Supported 00:25:07.424 Firmware Activation Notices: Not Supported 00:25:07.424 ANA Change Notices: Not Supported 00:25:07.424 PLE Aggregate Log Change Notices: Not Supported 00:25:07.424 LBA Status Info Alert Notices: Not Supported 00:25:07.424 EGE Aggregate Log Change Notices: Not Supported 00:25:07.424 Normal NVM Subsystem Shutdown event: Not Supported 00:25:07.424 Zone Descriptor Change Notices: Not Supported 00:25:07.424 Discovery Log Change Notices: Supported 00:25:07.424 Controller Attributes 00:25:07.424 128-bit Host Identifier: Not Supported 00:25:07.424 Non-Operational Permissive Mode: Not Supported 00:25:07.424 NVM Sets: Not Supported 00:25:07.424 Read Recovery Levels: Not Supported 00:25:07.424 Endurance Groups: Not Supported 00:25:07.424 Predictable Latency Mode: Not Supported 00:25:07.424 Traffic Based Keep ALive: Not Supported 00:25:07.424 Namespace Granularity: Not Supported 00:25:07.424 SQ Associations: Not Supported 00:25:07.424 UUID List: Not Supported 00:25:07.424 Multi-Domain Subsystem: Not Supported 00:25:07.424 Fixed Capacity Management: Not Supported 00:25:07.424 Variable Capacity Management: Not Supported 00:25:07.424 Delete Endurance Group: Not Supported 00:25:07.424 Delete NVM Set: Not Supported 00:25:07.424 Extended LBA Formats Supported: Not Supported 00:25:07.424 Flexible Data Placement Supported: Not Supported 00:25:07.424 00:25:07.424 Controller Memory Buffer Support 00:25:07.424 ================================ 00:25:07.424 Supported: No 00:25:07.424 00:25:07.424 Persistent Memory Region Support 00:25:07.424 ================================ 00:25:07.424 Supported: No 00:25:07.424 00:25:07.424 Admin Command Set Attributes 00:25:07.424 ============================ 00:25:07.424 Security Send/Receive: Not Supported 00:25:07.424 Format NVM: Not Supported 00:25:07.424 Firmware Activate/Download: Not Supported 00:25:07.424 Namespace Management: Not Supported 00:25:07.424 Device Self-Test: Not Supported 00:25:07.424 Directives: Not Supported 00:25:07.424 NVMe-MI: Not Supported 00:25:07.424 Virtualization Management: Not Supported 00:25:07.424 Doorbell Buffer Config: Not Supported 00:25:07.424 Get LBA Status Capability: Not Supported 00:25:07.424 Command & Feature Lockdown Capability: Not Supported 00:25:07.424 Abort Command Limit: 1 00:25:07.424 Async 
Event Request Limit: 4 00:25:07.424 Number of Firmware Slots: N/A 00:25:07.424 Firmware Slot 1 Read-Only: N/A 00:25:07.424 Firmware Activation Without Reset: N/A 00:25:07.424 Multiple Update Detection Support: N/A 00:25:07.424 Firmware Update Granularity: No Information Provided 00:25:07.424 Per-Namespace SMART Log: No 00:25:07.424 Asymmetric Namespace Access Log Page: Not Supported 00:25:07.424 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:07.424 Command Effects Log Page: Not Supported 00:25:07.424 Get Log Page Extended Data: Supported 00:25:07.424 Telemetry Log Pages: Not Supported 00:25:07.424 Persistent Event Log Pages: Not Supported 00:25:07.424 Supported Log Pages Log Page: May Support 00:25:07.424 Commands Supported & Effects Log Page: Not Supported 00:25:07.424 Feature Identifiers & Effects Log Page:May Support 00:25:07.424 NVMe-MI Commands & Effects Log Page: May Support 00:25:07.424 Data Area 4 for Telemetry Log: Not Supported 00:25:07.424 Error Log Page Entries Supported: 128 00:25:07.424 Keep Alive: Not Supported 00:25:07.424 00:25:07.424 NVM Command Set Attributes 00:25:07.424 ========================== 00:25:07.424 Submission Queue Entry Size 00:25:07.424 Max: 1 00:25:07.424 Min: 1 00:25:07.424 Completion Queue Entry Size 00:25:07.424 Max: 1 00:25:07.424 Min: 1 00:25:07.424 Number of Namespaces: 0 00:25:07.424 Compare Command: Not Supported 00:25:07.424 Write Uncorrectable Command: Not Supported 00:25:07.424 Dataset Management Command: Not Supported 00:25:07.424 Write Zeroes Command: Not Supported 00:25:07.424 Set Features Save Field: Not Supported 00:25:07.424 Reservations: Not Supported 00:25:07.424 Timestamp: Not Supported 00:25:07.424 Copy: Not Supported 00:25:07.424 Volatile Write Cache: Not Present 00:25:07.424 Atomic Write Unit (Normal): 1 00:25:07.424 Atomic Write Unit (PFail): 1 00:25:07.424 Atomic Compare & Write Unit: 1 00:25:07.424 Fused Compare & Write: Supported 00:25:07.424 Scatter-Gather List 00:25:07.424 SGL Command Set: Supported 00:25:07.424 SGL Keyed: Supported 00:25:07.424 SGL Bit Bucket Descriptor: Not Supported 00:25:07.424 SGL Metadata Pointer: Not Supported 00:25:07.424 Oversized SGL: Not Supported 00:25:07.424 SGL Metadata Address: Not Supported 00:25:07.424 SGL Offset: Supported 00:25:07.424 Transport SGL Data Block: Not Supported 00:25:07.424 Replay Protected Memory Block: Not Supported 00:25:07.424 00:25:07.424 Firmware Slot Information 00:25:07.424 ========================= 00:25:07.424 Active slot: 0 00:25:07.424 00:25:07.424 00:25:07.424 Error Log 00:25:07.424 ========= 00:25:07.424 00:25:07.424 Active Namespaces 00:25:07.424 ================= 00:25:07.424 Discovery Log Page 00:25:07.424 ================== 00:25:07.424 Generation Counter: 2 00:25:07.424 Number of Records: 2 00:25:07.424 Record Format: 0 00:25:07.424 00:25:07.424 Discovery Log Entry 0 00:25:07.424 ---------------------- 00:25:07.424 Transport Type: 3 (TCP) 00:25:07.424 Address Family: 1 (IPv4) 00:25:07.424 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:07.424 Entry Flags: 00:25:07.424 Duplicate Returned Information: 1 00:25:07.424 Explicit Persistent Connection Support for Discovery: 1 00:25:07.424 Transport Requirements: 00:25:07.424 Secure Channel: Not Required 00:25:07.424 Port ID: 0 (0x0000) 00:25:07.424 Controller ID: 65535 (0xffff) 00:25:07.424 Admin Max SQ Size: 128 00:25:07.424 Transport Service Identifier: 4420 00:25:07.424 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:07.424 Transport Address: 10.0.0.2 00:25:07.424 
Discovery Log Entry 1 00:25:07.424 ---------------------- 00:25:07.424 Transport Type: 3 (TCP) 00:25:07.424 Address Family: 1 (IPv4) 00:25:07.424 Subsystem Type: 2 (NVM Subsystem) 00:25:07.424 Entry Flags: 00:25:07.424 Duplicate Returned Information: 0 00:25:07.424 Explicit Persistent Connection Support for Discovery: 0 00:25:07.424 Transport Requirements: 00:25:07.424 Secure Channel: Not Required 00:25:07.424 Port ID: 0 (0x0000) 00:25:07.424 Controller ID: 65535 (0xffff) 00:25:07.424 Admin Max SQ Size: 128 00:25:07.424 Transport Service Identifier: 4420 00:25:07.424 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:07.424 Transport Address: 10.0.0.2 [2024-12-05 13:29:29.785199] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:25:07.424 [2024-12-05 13:29:29.785210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95100) on tqpair=0x1a33550 00:25:07.424 [2024-12-05 13:29:29.785217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.424 [2024-12-05 13:29:29.785223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95280) on tqpair=0x1a33550 00:25:07.424 [2024-12-05 13:29:29.785227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.424 [2024-12-05 13:29:29.785232] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95400) on tqpair=0x1a33550 00:25:07.424 [2024-12-05 13:29:29.785237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.425 [2024-12-05 13:29:29.785242] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95580) on tqpair=0x1a33550 00:25:07.425 [2024-12-05 13:29:29.785246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.425 [2024-12-05 13:29:29.785256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.785260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.785263] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a33550) 00:25:07.425 [2024-12-05 13:29:29.785271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.425 [2024-12-05 13:29:29.785284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95580, cid 3, qid 0 00:25:07.425 [2024-12-05 13:29:29.785401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.425 [2024-12-05 13:29:29.785407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.425 [2024-12-05 13:29:29.785411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.785415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95580) on tqpair=0x1a33550 00:25:07.425 [2024-12-05 13:29:29.785422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.785426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.785429] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a33550) 00:25:07.425 [2024-12-05 
13:29:29.785436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.425 [2024-12-05 13:29:29.785449] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95580, cid 3, qid 0 00:25:07.425 [2024-12-05 13:29:29.785651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.425 [2024-12-05 13:29:29.785657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.425 [2024-12-05 13:29:29.785662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.785665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95580) on tqpair=0x1a33550 00:25:07.425 [2024-12-05 13:29:29.785671] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:25:07.425 [2024-12-05 13:29:29.785675] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:25:07.425 [2024-12-05 13:29:29.785687] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.785691] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.785695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a33550) 00:25:07.425 [2024-12-05 13:29:29.785701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.425 [2024-12-05 13:29:29.785712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95580, cid 3, qid 0 00:25:07.425 [2024-12-05 13:29:29.785894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.425 [2024-12-05 13:29:29.785901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.425 [2024-12-05 13:29:29.785904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.785908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95580) on tqpair=0x1a33550 00:25:07.425 [2024-12-05 13:29:29.785918] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.785922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.785925] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a33550) 00:25:07.425 [2024-12-05 13:29:29.785932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.425 [2024-12-05 13:29:29.785942] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95580, cid 3, qid 0 00:25:07.425 [2024-12-05 13:29:29.786158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.425 [2024-12-05 13:29:29.786164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.425 [2024-12-05 13:29:29.786167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.786171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95580) on tqpair=0x1a33550 00:25:07.425 [2024-12-05 13:29:29.786181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.786185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.786188] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a33550) 00:25:07.425 [2024-12-05 13:29:29.786195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.425 [2024-12-05 13:29:29.786205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95580, cid 3, qid 0 00:25:07.425 [2024-12-05 13:29:29.786457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.425 [2024-12-05 13:29:29.786463] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.425 [2024-12-05 13:29:29.786467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.786471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95580) on tqpair=0x1a33550 00:25:07.425 [2024-12-05 13:29:29.786481] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.786485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.786488] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a33550) 00:25:07.425 [2024-12-05 13:29:29.786495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.425 [2024-12-05 13:29:29.786505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95580, cid 3, qid 0 00:25:07.425 [2024-12-05 13:29:29.786710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.425 [2024-12-05 13:29:29.786716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.425 [2024-12-05 13:29:29.786720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.786724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95580) on tqpair=0x1a33550 00:25:07.425 [2024-12-05 13:29:29.786735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.786740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.786743] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a33550) 00:25:07.425 [2024-12-05 13:29:29.786750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.425 [2024-12-05 13:29:29.786760] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a95580, cid 3, qid 0 00:25:07.425 [2024-12-05 13:29:29.789871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.425 [2024-12-05 13:29:29.789880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.425 [2024-12-05 13:29:29.789883] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.789887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a95580) on tqpair=0x1a33550 00:25:07.425 [2024-12-05 13:29:29.789895] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:25:07.425 00:25:07.425 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 
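The spdk_nvme_identify invocation above (with -L all debug logging) repeats the same bring-up against nqn.2016-06.io.spdk:cnode1, traced below on tqpair 0xe0f550. A rough host-side equivalent using SPDK's public API (a minimal sketch, not the identify tool's actual source):

/* Minimal sketch: connect to the target exercised above and print a few
 * Identify Controller fields via SPDK's public host API. Illustration
 * only; error handling is abbreviated. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        if (spdk_env_init(&env_opts) < 0)
                return 1;

        /* Same connection string the tool was given above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
                "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0)
                return 1;

        /* Runs the whole state machine traced in this log: connect
         * adminq, read VS/CAP, CC.EN handshake, IDENTIFY, AER setup,
         * keep-alive timer. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL)
                return 1;

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("CNTLID 0x%04x MDTS %u SN %.20s\n",
               (unsigned)cdata->cntlid, (unsigned)cdata->mdts, cdata->sn);

        return spdk_nvme_detach(ctrlr);
}

For reference when reading the GET LOG PAGE (02) entries in the discovery exchange above: the low byte of cdw10 is the log identifier (0x70 = Discovery) and the upper 16 bits are NUMDL, with the transfer length being (NUMDL + 1) dwords; so cdw10 0x00ff0070 requests 256 dwords = 1024 bytes, 0x02ff0070 requests 768 dwords = 3072 bytes, and 0x00010070 requests 2 dwords = 8 bytes, matching the c2h_data datal values in the trace.
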
00:25:07.425 [2024-12-05 13:29:29.828838] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:25:07.425 [2024-12-05 13:29:29.828888] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1032774 ] 00:25:07.425 [2024-12-05 13:29:29.883956] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:25:07.425 [2024-12-05 13:29:29.884002] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:07.425 [2024-12-05 13:29:29.884007] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:07.425 [2024-12-05 13:29:29.884022] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:07.425 [2024-12-05 13:29:29.884030] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:07.425 [2024-12-05 13:29:29.884613] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:25:07.425 [2024-12-05 13:29:29.884641] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe0f550 0 00:25:07.425 [2024-12-05 13:29:29.894874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:07.425 [2024-12-05 13:29:29.894887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:07.425 [2024-12-05 13:29:29.894891] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:07.425 [2024-12-05 13:29:29.894895] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:07.425 [2024-12-05 13:29:29.894924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.894929] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.894933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0f550) 00:25:07.425 [2024-12-05 13:29:29.894945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:07.425 [2024-12-05 13:29:29.894963] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71100, cid 0, qid 0 00:25:07.425 [2024-12-05 13:29:29.902874] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.425 [2024-12-05 13:29:29.902884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.425 [2024-12-05 13:29:29.902892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.902896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71100) on tqpair=0xe0f550 00:25:07.425 [2024-12-05 13:29:29.902908] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:07.425 [2024-12-05 13:29:29.902915] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:25:07.425 [2024-12-05 13:29:29.902921] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:25:07.425 [2024-12-05 13:29:29.902933] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.902938] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.425 [2024-12-05 13:29:29.902941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0f550) 00:25:07.425 [2024-12-05 13:29:29.902949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.425 [2024-12-05 13:29:29.902962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71100, cid 0, qid 0 00:25:07.425 [2024-12-05 13:29:29.903142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.425 [2024-12-05 13:29:29.903149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.425 [2024-12-05 13:29:29.903153] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.903157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71100) on tqpair=0xe0f550 00:25:07.426 [2024-12-05 13:29:29.903164] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:25:07.426 [2024-12-05 13:29:29.903171] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:25:07.426 [2024-12-05 13:29:29.903178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.903182] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.903186] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0f550) 00:25:07.426 [2024-12-05 13:29:29.903192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.426 [2024-12-05 13:29:29.903203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71100, cid 0, qid 0 00:25:07.426 [2024-12-05 13:29:29.903417] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.426 [2024-12-05 13:29:29.903424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.426 [2024-12-05 13:29:29.903428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.903432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71100) on tqpair=0xe0f550 00:25:07.426 [2024-12-05 13:29:29.903437] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:25:07.426 [2024-12-05 13:29:29.903445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:07.426 [2024-12-05 13:29:29.903451] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.903455] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.903459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0f550) 00:25:07.426 [2024-12-05 13:29:29.903466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.426 [2024-12-05 13:29:29.903476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71100, cid 0, qid 0 00:25:07.426 [2024-12-05 13:29:29.903721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.426 [2024-12-05 
13:29:29.903728] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.426 [2024-12-05 13:29:29.903731] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.903737] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71100) on tqpair=0xe0f550 00:25:07.426 [2024-12-05 13:29:29.903742] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:07.426 [2024-12-05 13:29:29.903752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.903756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.903760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0f550) 00:25:07.426 [2024-12-05 13:29:29.903766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.426 [2024-12-05 13:29:29.903776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71100, cid 0, qid 0 00:25:07.426 [2024-12-05 13:29:29.903974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.426 [2024-12-05 13:29:29.903981] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.426 [2024-12-05 13:29:29.903984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.903988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71100) on tqpair=0xe0f550 00:25:07.426 [2024-12-05 13:29:29.903993] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:07.426 [2024-12-05 13:29:29.903997] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:07.426 [2024-12-05 13:29:29.904005] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:07.426 [2024-12-05 13:29:29.904113] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:25:07.426 [2024-12-05 13:29:29.904118] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:07.426 [2024-12-05 13:29:29.904125] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.904129] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.904133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0f550) 00:25:07.426 [2024-12-05 13:29:29.904140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.426 [2024-12-05 13:29:29.904151] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71100, cid 0, qid 0 00:25:07.426 [2024-12-05 13:29:29.904343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.426 [2024-12-05 13:29:29.904350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.426 [2024-12-05 13:29:29.904353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.904357] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71100) on tqpair=0xe0f550 00:25:07.426 [2024-12-05 13:29:29.904362] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:07.426 [2024-12-05 13:29:29.904371] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.904375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.904378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0f550) 00:25:07.426 [2024-12-05 13:29:29.904385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.426 [2024-12-05 13:29:29.904395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71100, cid 0, qid 0 00:25:07.426 [2024-12-05 13:29:29.904619] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.426 [2024-12-05 13:29:29.904625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.426 [2024-12-05 13:29:29.904630] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.904634] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71100) on tqpair=0xe0f550 00:25:07.426 [2024-12-05 13:29:29.904639] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:07.426 [2024-12-05 13:29:29.904644] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:07.426 [2024-12-05 13:29:29.904652] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:25:07.426 [2024-12-05 13:29:29.904661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:07.426 [2024-12-05 13:29:29.904669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.904673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0f550) 00:25:07.426 [2024-12-05 13:29:29.904680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.426 [2024-12-05 13:29:29.904690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71100, cid 0, qid 0 00:25:07.426 [2024-12-05 13:29:29.904886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.426 [2024-12-05 13:29:29.904893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.426 [2024-12-05 13:29:29.904897] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.904901] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe0f550): datao=0, datal=4096, cccid=0 00:25:07.426 [2024-12-05 13:29:29.904905] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe71100) on tqpair(0xe0f550): expected_datao=0, payload_size=4096 00:25:07.426 [2024-12-05 13:29:29.904910] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.904917] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.904921] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.905072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.426 [2024-12-05 13:29:29.905079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.426 [2024-12-05 13:29:29.905082] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.905086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71100) on tqpair=0xe0f550 00:25:07.426 [2024-12-05 13:29:29.905093] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:25:07.426 [2024-12-05 13:29:29.905098] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:25:07.426 [2024-12-05 13:29:29.905103] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:25:07.426 [2024-12-05 13:29:29.905107] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:25:07.426 [2024-12-05 13:29:29.905112] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:25:07.426 [2024-12-05 13:29:29.905116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:25:07.426 [2024-12-05 13:29:29.905125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:07.426 [2024-12-05 13:29:29.905131] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.905135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.905139] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0f550) 00:25:07.426 [2024-12-05 13:29:29.905148] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:07.426 [2024-12-05 13:29:29.905160] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71100, cid 0, qid 0 00:25:07.426 [2024-12-05 13:29:29.905377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.426 [2024-12-05 13:29:29.905383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.426 [2024-12-05 13:29:29.905387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.905391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71100) on tqpair=0xe0f550 00:25:07.426 [2024-12-05 13:29:29.905397] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.905401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.905405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe0f550) 00:25:07.426 [2024-12-05 13:29:29.905411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.426 [2024-12-05 13:29:29.905417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.905421] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:25:07.426 [2024-12-05 13:29:29.905425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe0f550) 00:25:07.427 [2024-12-05 13:29:29.905431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.427 [2024-12-05 13:29:29.905437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.905441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.905444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe0f550) 00:25:07.427 [2024-12-05 13:29:29.905450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.427 [2024-12-05 13:29:29.905456] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.905460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.905464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0f550) 00:25:07.427 [2024-12-05 13:29:29.905469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.427 [2024-12-05 13:29:29.905474] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:07.427 [2024-12-05 13:29:29.905484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:07.427 [2024-12-05 13:29:29.905491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.905495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe0f550) 00:25:07.427 [2024-12-05 13:29:29.905502] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.427 [2024-12-05 13:29:29.905513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71100, cid 0, qid 0 00:25:07.427 [2024-12-05 13:29:29.905519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71280, cid 1, qid 0 00:25:07.427 [2024-12-05 13:29:29.905523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71400, cid 2, qid 0 00:25:07.427 [2024-12-05 13:29:29.905528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71580, cid 3, qid 0 00:25:07.427 [2024-12-05 13:29:29.905533] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71700, cid 4, qid 0 00:25:07.427 [2024-12-05 13:29:29.905751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.427 [2024-12-05 13:29:29.905758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.427 [2024-12-05 13:29:29.905763] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.905767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71700) on tqpair=0xe0f550 00:25:07.427 [2024-12-05 13:29:29.905772] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:25:07.427 [2024-12-05 13:29:29.905777] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:07.427 [2024-12-05 13:29:29.905787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:25:07.427 [2024-12-05 13:29:29.905794] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:07.427 [2024-12-05 13:29:29.905800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.905804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.905808] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe0f550) 00:25:07.427 [2024-12-05 13:29:29.905814] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:07.427 [2024-12-05 13:29:29.905825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71700, cid 4, qid 0 00:25:07.427 [2024-12-05 13:29:29.906017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.427 [2024-12-05 13:29:29.906024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.427 [2024-12-05 13:29:29.906028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.906032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71700) on tqpair=0xe0f550 00:25:07.427 [2024-12-05 13:29:29.906096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:25:07.427 [2024-12-05 13:29:29.906106] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:07.427 [2024-12-05 13:29:29.906113] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.906117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe0f550) 00:25:07.427 [2024-12-05 13:29:29.906124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.427 [2024-12-05 13:29:29.906134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71700, cid 4, qid 0 00:25:07.427 [2024-12-05 13:29:29.906357] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.427 [2024-12-05 13:29:29.906364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.427 [2024-12-05 13:29:29.906367] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.906371] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe0f550): datao=0, datal=4096, cccid=4 00:25:07.427 [2024-12-05 13:29:29.906375] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe71700) on tqpair(0xe0f550): expected_datao=0, payload_size=4096 00:25:07.427 [2024-12-05 13:29:29.906380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.906387] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.906390] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:25:07.427 [2024-12-05 13:29:29.906540] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.427 [2024-12-05 13:29:29.906547] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.427 [2024-12-05 13:29:29.906550] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.906554] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71700) on tqpair=0xe0f550 00:25:07.427 [2024-12-05 13:29:29.906565] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:25:07.427 [2024-12-05 13:29:29.906580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:25:07.427 [2024-12-05 13:29:29.906589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:25:07.427 [2024-12-05 13:29:29.906596] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.906600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe0f550) 00:25:07.427 [2024-12-05 13:29:29.906607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.427 [2024-12-05 13:29:29.906618] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71700, cid 4, qid 0 00:25:07.427 [2024-12-05 13:29:29.910871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.427 [2024-12-05 13:29:29.910880] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.427 [2024-12-05 13:29:29.910884] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.910888] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe0f550): datao=0, datal=4096, cccid=4 00:25:07.427 [2024-12-05 13:29:29.910892] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe71700) on tqpair(0xe0f550): expected_datao=0, payload_size=4096 00:25:07.427 [2024-12-05 13:29:29.910897] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.910903] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.910907] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.910915] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.427 [2024-12-05 13:29:29.910921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.427 [2024-12-05 13:29:29.910925] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.910928] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71700) on tqpair=0xe0f550 00:25:07.427 [2024-12-05 13:29:29.910939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:07.427 [2024-12-05 13:29:29.910949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:07.427 [2024-12-05 13:29:29.910956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.910960] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe0f550) 00:25:07.427 [2024-12-05 13:29:29.910966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.427 [2024-12-05 13:29:29.910978] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71700, cid 4, qid 0 00:25:07.427 [2024-12-05 13:29:29.911146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.427 [2024-12-05 13:29:29.911153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.427 [2024-12-05 13:29:29.911157] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.911160] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe0f550): datao=0, datal=4096, cccid=4 00:25:07.427 [2024-12-05 13:29:29.911165] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe71700) on tqpair(0xe0f550): expected_datao=0, payload_size=4096 00:25:07.427 [2024-12-05 13:29:29.911169] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.911186] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.911190] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.911355] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.427 [2024-12-05 13:29:29.911361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.427 [2024-12-05 13:29:29.911368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.427 [2024-12-05 13:29:29.911372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71700) on tqpair=0xe0f550 00:25:07.427 [2024-12-05 13:29:29.911382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:07.427 [2024-12-05 13:29:29.911390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:25:07.427 [2024-12-05 13:29:29.911398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:25:07.427 [2024-12-05 13:29:29.911404] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:07.427 [2024-12-05 13:29:29.911410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:07.427 [2024-12-05 13:29:29.911415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:25:07.428 [2024-12-05 13:29:29.911420] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:25:07.428 [2024-12-05 13:29:29.911425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:25:07.428 [2024-12-05 13:29:29.911431] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:25:07.428 [2024-12-05 13:29:29.911445] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.911449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe0f550) 00:25:07.428 [2024-12-05 13:29:29.911455] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.428 [2024-12-05 13:29:29.911462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.911466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.911470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe0f550) 00:25:07.428 [2024-12-05 13:29:29.911476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:07.428 [2024-12-05 13:29:29.911489] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71700, cid 4, qid 0 00:25:07.428 [2024-12-05 13:29:29.911495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71880, cid 5, qid 0 00:25:07.428 [2024-12-05 13:29:29.911693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.428 [2024-12-05 13:29:29.911699] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.428 [2024-12-05 13:29:29.911703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.911707] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71700) on tqpair=0xe0f550 00:25:07.428 [2024-12-05 13:29:29.911713] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.428 [2024-12-05 13:29:29.911719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.428 [2024-12-05 13:29:29.911723] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.911726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71880) on tqpair=0xe0f550 00:25:07.428 [2024-12-05 13:29:29.911735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.911739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe0f550) 00:25:07.428 [2024-12-05 13:29:29.911746] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.428 [2024-12-05 13:29:29.911758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71880, cid 5, qid 0 00:25:07.428 [2024-12-05 13:29:29.911928] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.428 [2024-12-05 13:29:29.911934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.428 [2024-12-05 13:29:29.911938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.911942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71880) on tqpair=0xe0f550 00:25:07.428 [2024-12-05 13:29:29.911951] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.911955] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe0f550) 00:25:07.428 [2024-12-05 13:29:29.911961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.428 [2024-12-05 13:29:29.911971] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71880, cid 5, qid 0 00:25:07.428 [2024-12-05 13:29:29.912196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.428 [2024-12-05 13:29:29.912203] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.428 [2024-12-05 13:29:29.912206] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71880) on tqpair=0xe0f550 00:25:07.428 [2024-12-05 13:29:29.912219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe0f550) 00:25:07.428 [2024-12-05 13:29:29.912230] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.428 [2024-12-05 13:29:29.912239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71880, cid 5, qid 0 00:25:07.428 [2024-12-05 13:29:29.912423] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.428 [2024-12-05 13:29:29.912430] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.428 [2024-12-05 13:29:29.912433] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912437] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71880) on tqpair=0xe0f550 00:25:07.428 [2024-12-05 13:29:29.912451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe0f550) 00:25:07.428 [2024-12-05 13:29:29.912462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.428 [2024-12-05 13:29:29.912469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe0f550) 00:25:07.428 [2024-12-05 13:29:29.912479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.428 [2024-12-05 13:29:29.912487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xe0f550) 00:25:07.428 [2024-12-05 13:29:29.912497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.428 [2024-12-05 13:29:29.912504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912508] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe0f550) 00:25:07.428 [2024-12-05 13:29:29.912514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.428 [2024-12-05 13:29:29.912525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71880, cid 5, qid 0 00:25:07.428 
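The four GET LOG PAGE (opcode 02h) commands recorded just above, on cid 5, 4, 6 and 7, pack the target log page and the transfer length into cdw10. A minimal sketch in plain C (independent of SPDK, and assuming the NVMe base-specification cdw10 layout for Get Log Page: bits 07:00 LID, bit 15 RAE, bits 31:16 NUMDL as a 0's-based dword count) decodes the four values seen in the log:

#include <stdio.h>
#include <stdint.h>

/* Decode the Get Log Page cdw10 values printed in the NOTICE records above. */
static void decode_get_log_page_cdw10(uint32_t cdw10)
{
        uint32_t lid   = cdw10 & 0xffu;           /* bits 07:00: log page identifier */
        uint32_t rae   = (cdw10 >> 15) & 0x1u;    /* bit 15: retain asynchronous event */
        uint32_t numdl = (cdw10 >> 16) & 0xffffu; /* bits 31:16: number of dwords - 1 */

        printf("cdw10=0x%08x -> LID 0x%02x, RAE %u, %u bytes\n",
               (unsigned)cdw10, (unsigned)lid, (unsigned)rae,
               (unsigned)((numdl + 1) * 4));
}

int main(void)
{
        /* cdw10 of the four admin commands sent above on cid 5, 4, 6 and 7. */
        uint32_t cdw10s[] = { 0x07ff0001, 0x007f0002, 0x007f0003, 0x03ff0005 };

        for (unsigned i = 0; i < sizeof(cdw10s) / sizeof(cdw10s[0]); i++)
                decode_get_log_page_cdw10(cdw10s[i]);
        return 0;
}

Decoded, the commands fetch LID 0x01 (Error Information, 8192 bytes: 128 supported entries x 64 bytes each), LID 0x02 (SMART / Health Information, 512 bytes), LID 0x03 (Firmware Slot Information, 512 bytes) and LID 0x05 (Commands Supported and Effects, 4096 bytes), which matches the datal=8192/512/512/4096 values reported back in the c2h_data records below.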
[2024-12-05 13:29:29.912534] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71700, cid 4, qid 0 00:25:07.428 [2024-12-05 13:29:29.912539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71a00, cid 6, qid 0 00:25:07.428 [2024-12-05 13:29:29.912544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71b80, cid 7, qid 0 00:25:07.428 [2024-12-05 13:29:29.912831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.428 [2024-12-05 13:29:29.912837] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.428 [2024-12-05 13:29:29.912840] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912844] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe0f550): datao=0, datal=8192, cccid=5 00:25:07.428 [2024-12-05 13:29:29.912849] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe71880) on tqpair(0xe0f550): expected_datao=0, payload_size=8192 00:25:07.428 [2024-12-05 13:29:29.912853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912899] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912904] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.428 [2024-12-05 13:29:29.912915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.428 [2024-12-05 13:29:29.912919] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912922] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe0f550): datao=0, datal=512, cccid=4 00:25:07.428 [2024-12-05 13:29:29.912927] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe71700) on tqpair(0xe0f550): expected_datao=0, payload_size=512 00:25:07.428 [2024-12-05 13:29:29.912931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912938] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912941] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.428 [2024-12-05 13:29:29.912953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.428 [2024-12-05 13:29:29.912956] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912959] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe0f550): datao=0, datal=512, cccid=6 00:25:07.428 [2024-12-05 13:29:29.912964] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe71a00) on tqpair(0xe0f550): expected_datao=0, payload_size=512 00:25:07.428 [2024-12-05 13:29:29.912968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912975] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912978] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912984] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:07.428 [2024-12-05 13:29:29.912990] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:07.428 [2024-12-05 13:29:29.912993] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:07.428 [2024-12-05 13:29:29.912997] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe0f550): datao=0, datal=4096, cccid=7 00:25:07.429 [2024-12-05 13:29:29.913001] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe71b80) on tqpair(0xe0f550): expected_datao=0, payload_size=4096 00:25:07.429 [2024-12-05 13:29:29.913005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.429 [2024-12-05 13:29:29.913016] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:07.429 [2024-12-05 13:29:29.913020] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:07.429 [2024-12-05 13:29:29.913032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.429 [2024-12-05 13:29:29.913038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.429 [2024-12-05 13:29:29.913041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.429 [2024-12-05 13:29:29.913047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71880) on tqpair=0xe0f550 00:25:07.429 [2024-12-05 13:29:29.913059] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.429 [2024-12-05 13:29:29.913065] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.429 [2024-12-05 13:29:29.913069] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.429 [2024-12-05 13:29:29.913072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71700) on tqpair=0xe0f550 00:25:07.429 [2024-12-05 13:29:29.913082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.429 [2024-12-05 13:29:29.913088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.429 [2024-12-05 13:29:29.913092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.429 [2024-12-05 13:29:29.913096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71a00) on tqpair=0xe0f550 00:25:07.429 [2024-12-05 13:29:29.913103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.429 [2024-12-05 13:29:29.913108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.429 [2024-12-05 13:29:29.913112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.429 [2024-12-05 13:29:29.913116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71b80) on tqpair=0xe0f550 00:25:07.429 ===================================================== 00:25:07.429 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:07.429 ===================================================== 00:25:07.429 Controller Capabilities/Features 00:25:07.429 ================================ 00:25:07.429 Vendor ID: 8086 00:25:07.429 Subsystem Vendor ID: 8086 00:25:07.429 Serial Number: SPDK00000000000001 00:25:07.429 Model Number: SPDK bdev Controller 00:25:07.429 Firmware Version: 25.01 00:25:07.429 Recommended Arb Burst: 6 00:25:07.429 IEEE OUI Identifier: e4 d2 5c 00:25:07.429 Multi-path I/O 00:25:07.429 May have multiple subsystem ports: Yes 00:25:07.429 May have multiple controllers: Yes 00:25:07.429 Associated with SR-IOV VF: No 00:25:07.429 Max Data Transfer Size: 131072 00:25:07.429 Max Number of Namespaces: 32 00:25:07.429 Max Number of I/O Queues: 127 00:25:07.429 NVMe Specification Version (VS): 1.3 00:25:07.429 NVMe Specification Version (Identify): 1.3 00:25:07.429 
Maximum Queue Entries: 128 00:25:07.429 Contiguous Queues Required: Yes 00:25:07.429 Arbitration Mechanisms Supported 00:25:07.429 Weighted Round Robin: Not Supported 00:25:07.429 Vendor Specific: Not Supported 00:25:07.429 Reset Timeout: 15000 ms 00:25:07.429 Doorbell Stride: 4 bytes 00:25:07.429 NVM Subsystem Reset: Not Supported 00:25:07.429 Command Sets Supported 00:25:07.429 NVM Command Set: Supported 00:25:07.429 Boot Partition: Not Supported 00:25:07.429 Memory Page Size Minimum: 4096 bytes 00:25:07.429 Memory Page Size Maximum: 4096 bytes 00:25:07.429 Persistent Memory Region: Not Supported 00:25:07.429 Optional Asynchronous Events Supported 00:25:07.429 Namespace Attribute Notices: Supported 00:25:07.429 Firmware Activation Notices: Not Supported 00:25:07.429 ANA Change Notices: Not Supported 00:25:07.429 PLE Aggregate Log Change Notices: Not Supported 00:25:07.429 LBA Status Info Alert Notices: Not Supported 00:25:07.429 EGE Aggregate Log Change Notices: Not Supported 00:25:07.429 Normal NVM Subsystem Shutdown event: Not Supported 00:25:07.429 Zone Descriptor Change Notices: Not Supported 00:25:07.429 Discovery Log Change Notices: Not Supported 00:25:07.429 Controller Attributes 00:25:07.429 128-bit Host Identifier: Supported 00:25:07.429 Non-Operational Permissive Mode: Not Supported 00:25:07.429 NVM Sets: Not Supported 00:25:07.429 Read Recovery Levels: Not Supported 00:25:07.429 Endurance Groups: Not Supported 00:25:07.429 Predictable Latency Mode: Not Supported 00:25:07.429 Traffic Based Keep ALive: Not Supported 00:25:07.429 Namespace Granularity: Not Supported 00:25:07.429 SQ Associations: Not Supported 00:25:07.429 UUID List: Not Supported 00:25:07.429 Multi-Domain Subsystem: Not Supported 00:25:07.429 Fixed Capacity Management: Not Supported 00:25:07.429 Variable Capacity Management: Not Supported 00:25:07.429 Delete Endurance Group: Not Supported 00:25:07.429 Delete NVM Set: Not Supported 00:25:07.429 Extended LBA Formats Supported: Not Supported 00:25:07.429 Flexible Data Placement Supported: Not Supported 00:25:07.429 00:25:07.429 Controller Memory Buffer Support 00:25:07.429 ================================ 00:25:07.429 Supported: No 00:25:07.429 00:25:07.429 Persistent Memory Region Support 00:25:07.429 ================================ 00:25:07.429 Supported: No 00:25:07.429 00:25:07.429 Admin Command Set Attributes 00:25:07.429 ============================ 00:25:07.429 Security Send/Receive: Not Supported 00:25:07.429 Format NVM: Not Supported 00:25:07.429 Firmware Activate/Download: Not Supported 00:25:07.429 Namespace Management: Not Supported 00:25:07.429 Device Self-Test: Not Supported 00:25:07.429 Directives: Not Supported 00:25:07.429 NVMe-MI: Not Supported 00:25:07.429 Virtualization Management: Not Supported 00:25:07.429 Doorbell Buffer Config: Not Supported 00:25:07.429 Get LBA Status Capability: Not Supported 00:25:07.429 Command & Feature Lockdown Capability: Not Supported 00:25:07.429 Abort Command Limit: 4 00:25:07.429 Async Event Request Limit: 4 00:25:07.429 Number of Firmware Slots: N/A 00:25:07.429 Firmware Slot 1 Read-Only: N/A 00:25:07.429 Firmware Activation Without Reset: N/A 00:25:07.429 Multiple Update Detection Support: N/A 00:25:07.429 Firmware Update Granularity: No Information Provided 00:25:07.429 Per-Namespace SMART Log: No 00:25:07.429 Asymmetric Namespace Access Log Page: Not Supported 00:25:07.429 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:07.429 Command Effects Log Page: Supported 00:25:07.429 Get Log Page Extended Data: 
Supported 00:25:07.429 Telemetry Log Pages: Not Supported 00:25:07.429 Persistent Event Log Pages: Not Supported 00:25:07.429 Supported Log Pages Log Page: May Support 00:25:07.429 Commands Supported & Effects Log Page: Not Supported 00:25:07.429 Feature Identifiers & Effects Log Page:May Support 00:25:07.429 NVMe-MI Commands & Effects Log Page: May Support 00:25:07.429 Data Area 4 for Telemetry Log: Not Supported 00:25:07.429 Error Log Page Entries Supported: 128 00:25:07.429 Keep Alive: Supported 00:25:07.429 Keep Alive Granularity: 10000 ms 00:25:07.429 00:25:07.429 NVM Command Set Attributes 00:25:07.429 ========================== 00:25:07.429 Submission Queue Entry Size 00:25:07.429 Max: 64 00:25:07.429 Min: 64 00:25:07.429 Completion Queue Entry Size 00:25:07.429 Max: 16 00:25:07.429 Min: 16 00:25:07.429 Number of Namespaces: 32 00:25:07.429 Compare Command: Supported 00:25:07.429 Write Uncorrectable Command: Not Supported 00:25:07.429 Dataset Management Command: Supported 00:25:07.429 Write Zeroes Command: Supported 00:25:07.429 Set Features Save Field: Not Supported 00:25:07.429 Reservations: Supported 00:25:07.429 Timestamp: Not Supported 00:25:07.429 Copy: Supported 00:25:07.429 Volatile Write Cache: Present 00:25:07.429 Atomic Write Unit (Normal): 1 00:25:07.429 Atomic Write Unit (PFail): 1 00:25:07.429 Atomic Compare & Write Unit: 1 00:25:07.429 Fused Compare & Write: Supported 00:25:07.429 Scatter-Gather List 00:25:07.429 SGL Command Set: Supported 00:25:07.429 SGL Keyed: Supported 00:25:07.429 SGL Bit Bucket Descriptor: Not Supported 00:25:07.429 SGL Metadata Pointer: Not Supported 00:25:07.429 Oversized SGL: Not Supported 00:25:07.429 SGL Metadata Address: Not Supported 00:25:07.429 SGL Offset: Supported 00:25:07.429 Transport SGL Data Block: Not Supported 00:25:07.429 Replay Protected Memory Block: Not Supported 00:25:07.429 00:25:07.429 Firmware Slot Information 00:25:07.429 ========================= 00:25:07.429 Active slot: 1 00:25:07.429 Slot 1 Firmware Revision: 25.01 00:25:07.429 00:25:07.429 00:25:07.429 Commands Supported and Effects 00:25:07.429 ============================== 00:25:07.429 Admin Commands 00:25:07.429 -------------- 00:25:07.429 Get Log Page (02h): Supported 00:25:07.429 Identify (06h): Supported 00:25:07.429 Abort (08h): Supported 00:25:07.429 Set Features (09h): Supported 00:25:07.429 Get Features (0Ah): Supported 00:25:07.429 Asynchronous Event Request (0Ch): Supported 00:25:07.429 Keep Alive (18h): Supported 00:25:07.429 I/O Commands 00:25:07.429 ------------ 00:25:07.429 Flush (00h): Supported LBA-Change 00:25:07.429 Write (01h): Supported LBA-Change 00:25:07.429 Read (02h): Supported 00:25:07.429 Compare (05h): Supported 00:25:07.430 Write Zeroes (08h): Supported LBA-Change 00:25:07.430 Dataset Management (09h): Supported LBA-Change 00:25:07.430 Copy (19h): Supported LBA-Change 00:25:07.430 00:25:07.430 Error Log 00:25:07.430 ========= 00:25:07.430 00:25:07.430 Arbitration 00:25:07.430 =========== 00:25:07.430 Arbitration Burst: 1 00:25:07.430 00:25:07.430 Power Management 00:25:07.430 ================ 00:25:07.430 Number of Power States: 1 00:25:07.430 Current Power State: Power State #0 00:25:07.430 Power State #0: 00:25:07.430 Max Power: 0.00 W 00:25:07.430 Non-Operational State: Operational 00:25:07.430 Entry Latency: Not Reported 00:25:07.430 Exit Latency: Not Reported 00:25:07.430 Relative Read Throughput: 0 00:25:07.430 Relative Read Latency: 0 00:25:07.430 Relative Write Throughput: 0 00:25:07.430 Relative Write Latency: 0 00:25:07.430 
Idle Power: Not Reported 00:25:07.430 Active Power: Not Reported 00:25:07.430 Non-Operational Permissive Mode: Not Supported 00:25:07.430 00:25:07.430 Health Information 00:25:07.430 ================== 00:25:07.430 Critical Warnings: 00:25:07.430 Available Spare Space: OK 00:25:07.430 Temperature: OK 00:25:07.430 Device Reliability: OK 00:25:07.430 Read Only: No 00:25:07.430 Volatile Memory Backup: OK 00:25:07.430 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:07.430 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:07.430 Available Spare: 0% 00:25:07.430 Available Spare Threshold: 0% 00:25:07.430 [2024-12-05 13:29:29.913210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.913216] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xe0f550) 00:25:07.430 [2024-12-05 13:29:29.913223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.430 [2024-12-05 13:29:29.913234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71b80, cid 7, qid 0 00:25:07.430 [2024-12-05 13:29:29.913427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.430 [2024-12-05 13:29:29.913434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.430 [2024-12-05 13:29:29.913437] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.913441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71b80) on tqpair=0xe0f550 00:25:07.430 [2024-12-05 13:29:29.913472] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:25:07.430 [2024-12-05 13:29:29.913482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71100) on tqpair=0xe0f550 00:25:07.430 [2024-12-05 13:29:29.913488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.430 [2024-12-05 13:29:29.913493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71280) on tqpair=0xe0f550 00:25:07.430 [2024-12-05 13:29:29.913498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.430 [2024-12-05 13:29:29.913503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71400) on tqpair=0xe0f550 00:25:07.430 [2024-12-05 13:29:29.913507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.430 [2024-12-05 13:29:29.913512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71580) on tqpair=0xe0f550 00:25:07.430 [2024-12-05 13:29:29.913517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:07.430 [2024-12-05 13:29:29.913525] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.913529] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.913532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0f550) 00:25:07.430 [2024-12-05 13:29:29.913539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:07.430 [2024-12-05 13:29:29.913551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71580, cid 3, qid 0 00:25:07.430 [2024-12-05 13:29:29.913711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.430 [2024-12-05 13:29:29.913718] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.430 [2024-12-05 13:29:29.913721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.913725] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71580) on tqpair=0xe0f550 00:25:07.430 [2024-12-05 13:29:29.913732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.913736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.913739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0f550) 00:25:07.430 [2024-12-05 13:29:29.913746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.430 [2024-12-05 13:29:29.913759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71580, cid 3, qid 0 00:25:07.430 [2024-12-05 13:29:29.913989] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.430 [2024-12-05 13:29:29.913996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.430 [2024-12-05 13:29:29.914000] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.914004] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71580) on tqpair=0xe0f550 00:25:07.430 [2024-12-05 13:29:29.914009] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:07.430 [2024-12-05 13:29:29.914013] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:07.430 [2024-12-05 13:29:29.914023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.914027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.914030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0f550) 00:25:07.430 [2024-12-05 13:29:29.914037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.430 [2024-12-05 13:29:29.914047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71580, cid 3, qid 0 00:25:07.430 [2024-12-05 13:29:29.914263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.430 [2024-12-05 13:29:29.914269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.430 [2024-12-05 13:29:29.914273] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.914276] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71580) on tqpair=0xe0f550 00:25:07.430 [2024-12-05 13:29:29.914287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.914291] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.914295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0f550) 00:25:07.430 [2024-12-05 13:29:29.914301] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.430 [2024-12-05 13:29:29.914311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71580, cid 3, qid 0 00:25:07.430 [2024-12-05 13:29:29.914506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.430 [2024-12-05 13:29:29.914513] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.430 [2024-12-05 13:29:29.914516] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.914520] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71580) on tqpair=0xe0f550 00:25:07.430 [2024-12-05 13:29:29.914530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.914534] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.914537] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0f550) 00:25:07.430 [2024-12-05 13:29:29.914544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.430 [2024-12-05 13:29:29.914556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71580, cid 3, qid 0 00:25:07.430 [2024-12-05 13:29:29.914777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.430 [2024-12-05 13:29:29.914783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.430 [2024-12-05 13:29:29.914787] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.914791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71580) on tqpair=0xe0f550 00:25:07.430 [2024-12-05 13:29:29.914800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.914804] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.914808] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe0f550) 00:25:07.430 [2024-12-05 13:29:29.914815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.430 [2024-12-05 13:29:29.914824] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe71580, cid 3, qid 0 00:25:07.430 [2024-12-05 13:29:29.918872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:07.430 [2024-12-05 13:29:29.918881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:07.430 [2024-12-05 13:29:29.918884] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:07.430 [2024-12-05 13:29:29.918888] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe71580) on tqpair=0xe0f550 00:25:07.430 [2024-12-05 13:29:29.918896] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:25:07.430 Life Percentage Used: 0% 00:25:07.430 Data Units Read: 0 00:25:07.430 Data Units Written: 0 00:25:07.430 Host Read Commands: 0 00:25:07.430 Host Write Commands: 0 00:25:07.430 Controller Busy Time: 0 minutes 00:25:07.430 Power Cycles: 0 00:25:07.430 Power On Hours: 0 hours 00:25:07.430 Unsafe Shutdowns: 0 00:25:07.430 Unrecoverable Media Errors: 0 00:25:07.430 Lifetime Error Log Entries: 0 00:25:07.430 Warning Temperature Time: 0 minutes 00:25:07.430 Critical 
Temperature Time: 0 minutes 00:25:07.430 00:25:07.430 Number of Queues 00:25:07.430 ================ 00:25:07.430 Number of I/O Submission Queues: 127 00:25:07.430 Number of I/O Completion Queues: 127 00:25:07.430 00:25:07.430 Active Namespaces 00:25:07.430 ================= 00:25:07.430 Namespace ID:1 00:25:07.430 Error Recovery Timeout: Unlimited 00:25:07.430 Command Set Identifier: NVM (00h) 00:25:07.430 Deallocate: Supported 00:25:07.430 Deallocated/Unwritten Error: Not Supported 00:25:07.430 Deallocated Read Value: Unknown 00:25:07.430 Deallocate in Write Zeroes: Not Supported 00:25:07.431 Deallocated Guard Field: 0xFFFF 00:25:07.431 Flush: Supported 00:25:07.431 Reservation: Supported 00:25:07.431 Namespace Sharing Capabilities: Multiple Controllers 00:25:07.431 Size (in LBAs): 131072 (0GiB) 00:25:07.431 Capacity (in LBAs): 131072 (0GiB) 00:25:07.431 Utilization (in LBAs): 131072 (0GiB) 00:25:07.431 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:07.431 EUI64: ABCDEF0123456789 00:25:07.431 UUID: 3d429515-c739-4b68-9c4e-2222ffbb77fc 00:25:07.431 Thin Provisioning: Not Supported 00:25:07.431 Per-NS Atomic Units: Yes 00:25:07.431 Atomic Boundary Size (Normal): 0 00:25:07.431 Atomic Boundary Size (PFail): 0 00:25:07.431 Atomic Boundary Offset: 0 00:25:07.431 Maximum Single Source Range Length: 65535 00:25:07.431 Maximum Copy Length: 65535 00:25:07.431 Maximum Source Range Count: 1 00:25:07.431 NGUID/EUI64 Never Reused: No 00:25:07.431 Namespace Write Protected: No 00:25:07.431 Number of LBA Formats: 1 00:25:07.431 Current LBA Format: LBA Format #00 00:25:07.431 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:07.431 00:25:07.431 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:07.431 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:07.431 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.431 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:07.431 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.431 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:07.431 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:07.431 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:07.431 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:07.431 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:07.431 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:07.431 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:07.431 13:29:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:07.431 rmmod nvme_tcp 00:25:07.431 rmmod nvme_fabrics 00:25:07.692 rmmod nvme_keyring 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1032523 ']' 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 
1032523 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1032523 ']' 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1032523 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1032523 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1032523' 00:25:07.692 killing process with pid 1032523 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1032523 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1032523 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.692 13:29:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:10.237 00:25:10.237 real 0m12.354s 00:25:10.237 user 0m8.507s 00:25:10.237 sys 0m6.687s 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:10.237 ************************************ 00:25:10.237 END TEST nvmf_identify 00:25:10.237 ************************************ 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.237 ************************************ 00:25:10.237 START TEST nvmf_perf 
00:25:10.237 ************************************ 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:10.237 * Looking for test storage... 00:25:10.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:10.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.237 --rc genhtml_branch_coverage=1 00:25:10.237 --rc genhtml_function_coverage=1 00:25:10.237 --rc genhtml_legend=1 00:25:10.237 --rc geninfo_all_blocks=1 00:25:10.237 --rc geninfo_unexecuted_blocks=1 00:25:10.237 00:25:10.237 ' 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:10.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.237 --rc genhtml_branch_coverage=1 00:25:10.237 --rc genhtml_function_coverage=1 00:25:10.237 --rc genhtml_legend=1 00:25:10.237 --rc geninfo_all_blocks=1 00:25:10.237 --rc geninfo_unexecuted_blocks=1 00:25:10.237 00:25:10.237 ' 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:10.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.237 --rc genhtml_branch_coverage=1 00:25:10.237 --rc genhtml_function_coverage=1 00:25:10.237 --rc genhtml_legend=1 00:25:10.237 --rc geninfo_all_blocks=1 00:25:10.237 --rc geninfo_unexecuted_blocks=1 00:25:10.237 00:25:10.237 ' 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:10.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:10.237 --rc genhtml_branch_coverage=1 00:25:10.237 --rc genhtml_function_coverage=1 00:25:10.237 --rc genhtml_legend=1 00:25:10.237 --rc geninfo_all_blocks=1 00:25:10.237 --rc geninfo_unexecuted_blocks=1 00:25:10.237 00:25:10.237 ' 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:10.237 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:10.238 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.238 13:29:32 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:10.238 13:29:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:18.372 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:18.373 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:18.373 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:18.373 Found net devices under 0000:31:00.0: cvl_0_0 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:18.373 13:29:40 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:18.373 Found net devices under 0000:31:00.1: cvl_0_1 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:18.373 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.634 13:29:40 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:18.634 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:18.634 13:29:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:18.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:18.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms
00:25:18.634
00:25:18.634 --- 10.0.0.2 ping statistics ---
00:25:18.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:18.634 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:18.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:18.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms
00:25:18.634
00:25:18.634 --- 10.0.0.1 ping statistics ---
00:25:18.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:18.634 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1037555
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1037555
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1037555 ']'
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:18.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:18.634 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:25:18.634 [2024-12-05 13:29:41.116145] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization...
00:25:18.634 [2024-12-05 13:29:41.116199] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:18.894 [2024-12-05 13:29:41.201693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:18.894 [2024-12-05 13:29:41.238174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:18.895 [2024-12-05 13:29:41.238207] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:18.895 [2024-12-05 13:29:41.238215] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:18.895 [2024-12-05 13:29:41.238221] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:18.895 [2024-12-05 13:29:41.238227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:18.895 [2024-12-05 13:29:41.239835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:18.895 [2024-12-05 13:29:41.239982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:18.895 [2024-12-05 13:29:41.240044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:18.895 [2024-12-05 13:29:41.240045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:25:19.465 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:19.465 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0
00:25:19.465 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:19.465 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:19.465 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:25:19.465 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:19.465 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:25:19.465 13:29:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:25:20.036 13:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:25:20.036 13:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:25:20.320 13:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0
00:25:20.320 13:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:25:20.320 13:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
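The RPCs traced just above provision the block devices the perf runs will export. As a standalone sketch, not verbatim from perf.sh (paths are shortened relative to the SPDK checkout, and gen_nvme.sh's JSON output is assumed to be piped into the config-load call, as the shared perf.sh line 28 in the trace suggests):

    # attach the box's local NVMe controller(s) as SPDK bdevs (yields Nvme0n1)
    scripts/gen_nvme.sh | scripts/rpc.py load_subsystem_config
    # read back the PCI address of the attached controller -> 0000:65:00.0 here
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[].params | select(.name=="Nvme0").traddr'
    # create a 64 MiB RAM-backed bdev with 512-byte blocks; the RPC prints "Malloc0"
    scripts/rpc.py bdev_malloc_create 64 512

The 64/512 sizing comes from the MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE variables set earlier in the trace.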
00:25:20.320 13:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']'
00:25:20.320 13:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:25:20.320 13:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:25:20.320 13:29:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:25:20.580 [2024-12-05 13:29:42.990970] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:20.580 13:29:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:20.840 13:29:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:25:20.840 13:29:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:20.840 13:29:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:25:20.840 13:29:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:25:21.100 13:29:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:21.361 [2024-12-05 13:29:43.733841] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:21.361 13:29:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
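That completes the target's data path: one TCP transport, one subsystem with two namespaces, and listeners for I/O and discovery. Distilled from the xtrace above into a standalone sketch (rpc.py path shortened; all arguments exactly as traced):

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host NQN
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # first ns added, NSID 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1                   # second ns added, NSID 2
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The perf runs that follow refer back to these two namespaces as NSID 1 and NSID 2.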
00:25:21.622 13:29:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']'
00:25:21.622 13:29:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:25:21.622 13:29:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:25:21.622 13:29:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:25:23.002 Initializing NVMe Controllers
00:25:23.002 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a]
00:25:23.002 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0
00:25:23.002 Initialization complete. Launching workers.
00:25:23.002 ========================================================
00:25:23.002 Latency(us)
00:25:23.002 Device Information : IOPS MiB/s Average min max
00:25:23.002 PCIE (0000:65:00.0) NSID 1 from core 0: 79244.43 309.55 402.98 13.25 4941.19
00:25:23.002 ========================================================
00:25:23.002 Total : 79244.43 309.55 402.98 13.25 4941.19
00:25:23.002
00:25:23.002 13:29:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:24.383 Initializing NVMe Controllers
00:25:24.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:24.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:24.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:24.383 Initialization complete. Launching workers.
00:25:24.383 ========================================================
00:25:24.383 Latency(us)
00:25:24.383 Device Information : IOPS MiB/s Average min max
00:25:24.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 103.00 0.40 9710.76 307.91 45913.28
00:25:24.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 44.00 0.17 23376.23 5978.17 48886.29
00:25:24.383 ========================================================
00:25:24.383 Total : 147.00 0.57 13801.11 307.91 48886.29
00:25:24.383
00:25:24.383 13:29:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:25.765 Initializing NVMe Controllers
00:25:25.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:25.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:25.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:25.765 Initialization complete. Launching workers.
00:25:25.765 ========================================================
00:25:25.765 Latency(us)
00:25:25.765 Device Information : IOPS MiB/s Average min max
00:25:25.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10357.69 40.46 3089.72 401.28 8342.35
00:25:25.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3781.89 14.77 8519.73 5878.96 16772.61
00:25:25.765 ========================================================
00:25:25.765 Total : 14139.58 55.23 4542.07 401.28 16772.61
00:25:25.765
00:25:25.765 13:29:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:25:25.765 13:29:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:25:25.765 13:29:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:28.309 Initializing NVMe Controllers
00:25:28.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:28.309 Controller IO queue size 128, less than required.
00:25:28.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:28.309 Controller IO queue size 128, less than required.
00:25:28.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
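The "Controller IO queue size 128, less than required" warnings opening this 256 KiB run are expected rather than a failure. A plausible reading, not spelled out by the log itself: with -o 262144 carved into 16384-byte units by -O, the requested depth of -q 128 translates into far more in-flight units than a 128-entry I/O queue can hold, so the excess waits in the driver:

    # back-of-envelope check of the queue pressure (assumed interpretation)
    echo $(( 128 * (262144 / 16384) ))   # 2048 in-flight units vs. a 128-entry IO queue

The run still completes, as the latency table below shows; throughput simply reflects the driver-side queuing.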
00:25:28.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:28.309 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:28.309 Initialization complete. Launching workers.
00:25:28.309 ========================================================
00:25:28.309 Latency(us)
00:25:28.309 Device Information : IOPS MiB/s Average min max
00:25:28.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1606.93 401.73 81727.20 51473.59 126300.50
00:25:28.309 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 605.97 151.49 224638.59 69636.45 343128.14
00:25:28.309 ========================================================
00:25:28.309 Total : 2212.90 553.23 120861.54 51473.59 343128.14
00:25:28.309
00:25:28.309 13:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:25:28.309 No valid NVMe controllers or AIO or URING devices found
00:25:28.309 Initializing NVMe Controllers
00:25:28.309 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:28.309 Controller IO queue size 128, less than required.
00:25:28.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:28.309 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:25:28.309 Controller IO queue size 128, less than required.
00:25:28.309 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:28.309 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:25:28.309 WARNING: Some requested NVMe devices were skipped
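Unlike the queue-size warnings, the -o 36964 run above is a deliberate negative case: 36964 bytes is not a whole number of 512-byte sectors, so perf removes both namespaces and then reports that no valid controllers remain. The arithmetic behind the two WARNING lines:

    echo $(( 36964 % 512 ))   # 100 -> not block-aligned; an aligned size such as 36864 (72 * 512) would pass

With every namespace skipped, there is nothing to measure, which is exactly the behavior the test wants to exercise here.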
00:25:28.309 13:29:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:25:30.852 Initializing NVMe Controllers
00:25:30.852 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:30.852 Controller IO queue size 128, less than required.
00:25:30.852 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:30.852 Controller IO queue size 128, less than required.
00:25:30.852 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:30.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:30.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:30.852 Initialization complete. Launching workers.
00:25:30.852
00:25:30.852 ====================
00:25:30.852 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:25:30.852 TCP transport:
00:25:30.852 polls: 21750
00:25:30.852 idle_polls: 12160
00:25:30.852 sock_completions: 9590
00:25:30.852 nvme_completions: 6501
00:25:30.852 submitted_requests: 9834
00:25:30.852 queued_requests: 1
00:25:30.852
00:25:30.852 ====================
00:25:30.852 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:25:30.852 TCP transport:
00:25:30.852 polls: 21421
00:25:30.852 idle_polls: 10999
00:25:30.852 sock_completions: 10422
00:25:30.852 nvme_completions: 6651
00:25:30.852 submitted_requests: 9968
00:25:30.852 queued_requests: 1
00:25:30.852 ========================================================
00:25:30.852 Latency(us)
00:25:30.852 Device Information : IOPS MiB/s Average min max
00:25:30.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1622.23 405.56 80621.88 32990.28 156533.49
00:25:30.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1659.67 414.92 77800.91 38531.14 132104.35
00:25:30.852 ========================================================
00:25:30.852 Total : 3281.90 820.47 79195.31 32990.28 156533.49
00:25:30.852
00:25:30.852 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:25:30.852 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:30.852 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:25:30.852 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:25:30.852 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:25:30.852 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:30.852 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:25:30.852 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:30.852 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:25:30.852 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:30.852 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:31.113 rmmod nvme_tcp
00:25:31.113 rmmod nvme_fabrics
00:25:31.113 rmmod nvme_keyring
00:25:31.113 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:31.113 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:25:31.113 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:25:31.113 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1037555 ']'
00:25:31.113 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1037555
00:25:31.113 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1037555 ']'
00:25:31.113 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1037555
00:25:31.113 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:25:31.113 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:31.113 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1037555
00:25:31.113 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:31.113 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:31.113 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1037555'
00:25:31.113 killing process with pid 1037555
00:25:31.113 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1037555
00:25:31.113 13:29:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1037555
00:25:33.026 13:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:33.026 13:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:33.026 13:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:33.026 13:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:25:33.026 13:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save
00:25:33.026 13:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:33.026 13:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore
00:25:33.026 13:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:33.026 13:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:33.026 13:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:33.026 13:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:33.026 13:29:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:35.571
00:25:35.571 real 0m25.203s
00:25:35.571 user 0m58.798s
00:25:35.571 sys 0m9.082s
00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:25:35.571 ************************************
00:25:35.571 END TEST nvmf_perf
00:25:35.571 ************************************
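The break between tests is a convenient place to recap the network plumbing that nvmf_perf just dismantled and that nvmf_fio_host, starting below, rebuilds the same way. Condensed from the nvmf_tcp_init trace earlier (interface names cvl_0_0/cvl_0_1 are specific to this two-port E810 box; the iptables comment tag and error handling are omitted):

    ip netns add cvl_0_0_ns_spdk                                   # target port gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # opened via the ipts wrapper in the trace

Putting the target port in a namespace forces host-to-target traffic over the physical link rather than the loopback path, which is what makes the 10.0.0.1 to 10.0.0.2 pings and perf numbers meaningful.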
00:25:35.571 13:29:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:25:35.571 13:29:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:35.571 13:29:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:35.571 13:29:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:35.571 ************************************
00:25:35.571 START TEST nvmf_fio_host
00:25:35.571 ************************************
00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:25:35.571 * Looking for test storage...
00:25:35.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:35.571 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:35.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.572 --rc genhtml_branch_coverage=1 00:25:35.572 --rc genhtml_function_coverage=1 00:25:35.572 --rc genhtml_legend=1 00:25:35.572 --rc geninfo_all_blocks=1 00:25:35.572 --rc geninfo_unexecuted_blocks=1 00:25:35.572 00:25:35.572 ' 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:35.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.572 --rc genhtml_branch_coverage=1 00:25:35.572 --rc genhtml_function_coverage=1 00:25:35.572 --rc genhtml_legend=1 00:25:35.572 --rc geninfo_all_blocks=1 00:25:35.572 --rc geninfo_unexecuted_blocks=1 00:25:35.572 00:25:35.572 ' 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:35.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.572 --rc genhtml_branch_coverage=1 00:25:35.572 --rc genhtml_function_coverage=1 00:25:35.572 --rc genhtml_legend=1 00:25:35.572 --rc geninfo_all_blocks=1 00:25:35.572 --rc geninfo_unexecuted_blocks=1 00:25:35.572 00:25:35.572 ' 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:35.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:35.572 --rc genhtml_branch_coverage=1 00:25:35.572 --rc genhtml_function_coverage=1 00:25:35.572 --rc genhtml_legend=1 00:25:35.572 --rc geninfo_all_blocks=1 00:25:35.572 --rc geninfo_unexecuted_blocks=1 00:25:35.572 00:25:35.572 ' 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.572 13:29:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:35.572 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:35.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:35.573 
13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:35.573 13:29:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:43.770 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:43.770 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:43.770 Found net devices under 0000:31:00.0: cvl_0_0 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:43.770 Found net devices under 0000:31:00.1: cvl_0_1 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:43.770 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:43.771 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:43.771 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:43.771 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:43.771 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.771 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:43.771 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:43.771 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:43.771 13:30:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:43.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:43.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:25:43.771 00:25:43.771 --- 10.0.0.2 ping statistics --- 00:25:43.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.771 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:43.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:43.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:25:43.771 00:25:43.771 --- 10.0.0.1 ping statistics --- 00:25:43.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:43.771 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1045090 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1045090 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1045090 ']' 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:43.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:43.771 13:30:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.032 [2024-12-05 13:30:06.385183] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
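For readers following the setup: the trace above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming default paths; the real waitforlisten in autotest_common.sh also checks the pid and enforces a retry limit:

# start nvmf_tgt in the test namespace, as the trace above shows
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the RPC socket until the app is ready to serve requests
while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
done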
00:25:44.032 [2024-12-05 13:30:06.385277] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.032 [2024-12-05 13:30:06.477234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:44.032 [2024-12-05 13:30:06.518462] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:44.032 [2024-12-05 13:30:06.518498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:44.032 [2024-12-05 13:30:06.518507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:44.032 [2024-12-05 13:30:06.518515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:44.032 [2024-12-05 13:30:06.518520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:44.032 [2024-12-05 13:30:06.520398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.032 [2024-12-05 13:30:06.520518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:44.032 [2024-12-05 13:30:06.520679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.032 [2024-12-05 13:30:06.520680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:44.976 13:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:44.976 13:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:44.976 13:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:44.976 [2024-12-05 13:30:07.324811] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:44.976 13:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:44.976 13:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:44.976 13:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.976 13:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:45.236 Malloc1 00:25:45.236 13:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:45.236 13:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:45.496 13:30:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.757 [2024-12-05 13:30:08.108719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.757 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:45.757 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:45.757 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:45.757 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:45.757 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:45.757 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:45.757 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:45.757 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:45.757 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:45.757 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:45.757 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:45.757 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:45.757 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:45.757 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:46.043 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:46.043 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:46.043 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:46.043 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:46.043 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:46.043 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:46.043 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:46.043 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:46.043 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:46.043 13:30:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:46.305 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:46.305 fio-3.35 00:25:46.305 Starting 1 thread 00:25:48.849 00:25:48.849 test: (groupid=0, jobs=1): 
err= 0: pid=1046159: Thu Dec 5 13:30:11 2024 00:25:48.849 read: IOPS=13.8k, BW=54.0MiB/s (56.7MB/s)(108MiB/2004msec) 00:25:48.849 slat (usec): min=2, max=276, avg= 2.17, stdev= 2.36 00:25:48.849 clat (usec): min=3667, max=9840, avg=5093.65, stdev=421.15 00:25:48.849 lat (usec): min=3670, max=9847, avg=5095.83, stdev=421.44 00:25:48.849 clat percentiles (usec): 00:25:48.849 | 1.00th=[ 4228], 5.00th=[ 4555], 10.00th=[ 4686], 20.00th=[ 4817], 00:25:48.849 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5080], 60.00th=[ 5145], 00:25:48.849 | 70.00th=[ 5276], 80.00th=[ 5342], 90.00th=[ 5473], 95.00th=[ 5604], 00:25:48.849 | 99.00th=[ 5997], 99.50th=[ 7373], 99.90th=[ 9372], 99.95th=[ 9634], 00:25:48.849 | 99.99th=[ 9765] 00:25:48.849 bw ( KiB/s): min=53736, max=55976, per=99.99%, avg=55340.00, stdev=1072.45, samples=4 00:25:48.849 iops : min=13434, max=13994, avg=13835.00, stdev=268.11, samples=4 00:25:48.849 write: IOPS=13.8k, BW=54.1MiB/s (56.7MB/s)(108MiB/2004msec); 0 zone resets 00:25:48.849 slat (usec): min=2, max=271, avg= 2.24, stdev= 1.81 00:25:48.849 clat (usec): min=2897, max=8578, avg=4114.32, stdev=381.06 00:25:48.849 lat (usec): min=2915, max=8585, avg=4116.56, stdev=381.45 00:25:48.849 clat percentiles (usec): 00:25:48.849 | 1.00th=[ 3392], 5.00th=[ 3654], 10.00th=[ 3752], 20.00th=[ 3884], 00:25:48.849 | 30.00th=[ 3949], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4146], 00:25:48.849 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4555], 00:25:48.849 | 99.00th=[ 5211], 99.50th=[ 6521], 99.90th=[ 8094], 99.95th=[ 8455], 00:25:48.849 | 99.99th=[ 8586] 00:25:48.849 bw ( KiB/s): min=54120, max=55872, per=99.95%, avg=55338.00, stdev=817.03, samples=4 00:25:48.849 iops : min=13530, max=13968, avg=13834.50, stdev=204.26, samples=4 00:25:48.849 lat (msec) : 4=17.83%, 10=82.17% 00:25:48.849 cpu : usr=76.68%, sys=22.47%, ctx=32, majf=0, minf=16 00:25:48.849 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:48.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:48.849 issued rwts: total=27727,27739,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.849 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:48.849 00:25:48.849 Run status group 0 (all jobs): 00:25:48.849 READ: bw=54.0MiB/s (56.7MB/s), 54.0MiB/s-54.0MiB/s (56.7MB/s-56.7MB/s), io=108MiB (114MB), run=2004-2004msec 00:25:48.849 WRITE: bw=54.1MiB/s (56.7MB/s), 54.1MiB/s-54.1MiB/s (56.7MB/s-56.7MB/s), io=108MiB (114MB), run=2004-2004msec 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:48.849 
13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:48.849 13:30:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:49.109 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:49.109 fio-3.35 00:25:49.110 Starting 1 thread 00:25:51.656 00:25:51.656 test: (groupid=0, jobs=1): err= 0: pid=1046909: Thu Dec 5 13:30:13 2024 00:25:51.656 read: IOPS=9335, BW=146MiB/s (153MB/s)(292MiB/2004msec) 00:25:51.656 slat (usec): min=3, max=110, avg= 3.60, stdev= 1.58 00:25:51.656 clat (usec): min=1738, max=17681, avg=8230.20, stdev=1909.85 00:25:51.656 lat (usec): min=1741, max=17685, avg=8233.80, stdev=1909.96 00:25:51.656 clat percentiles (usec): 00:25:51.656 | 1.00th=[ 4228], 5.00th=[ 5276], 10.00th=[ 5800], 20.00th=[ 6456], 00:25:51.656 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 8160], 60.00th=[ 8717], 00:25:51.656 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10814], 95.00th=[11207], 00:25:51.656 | 99.00th=[12518], 99.50th=[13042], 99.90th=[13829], 99.95th=[14353], 00:25:51.656 | 99.99th=[15270] 00:25:51.656 bw ( KiB/s): min=61984, max=82336, per=49.21%, avg=73504.00, stdev=8541.10, samples=4 00:25:51.656 iops : min= 3874, max= 5146, avg=4594.00, stdev=533.82, samples=4 00:25:51.656 write: IOPS=5526, BW=86.4MiB/s (90.5MB/s)(151MiB/1745msec); 0 zone resets 00:25:51.656 slat (usec): min=39, max=337, 
avg=40.89, stdev= 6.73 00:25:51.656 clat (usec): min=3145, max=17271, avg=9463.38, stdev=1581.87 00:25:51.656 lat (usec): min=3185, max=17311, avg=9504.27, stdev=1582.82 00:25:51.656 clat percentiles (usec): 00:25:51.656 | 1.00th=[ 6259], 5.00th=[ 7308], 10.00th=[ 7701], 20.00th=[ 8160], 00:25:51.656 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:25:51.656 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11469], 95.00th=[12387], 00:25:51.656 | 99.00th=[13960], 99.50th=[14746], 99.90th=[16909], 99.95th=[16909], 00:25:51.656 | 99.99th=[17171] 00:25:51.656 bw ( KiB/s): min=64832, max=86016, per=86.59%, avg=76568.00, stdev=8885.69, samples=4 00:25:51.656 iops : min= 4052, max= 5376, avg=4785.50, stdev=555.36, samples=4 00:25:51.656 lat (msec) : 2=0.02%, 4=0.52%, 10=75.58%, 20=23.88% 00:25:51.656 cpu : usr=84.42%, sys=14.13%, ctx=19, majf=0, minf=42 00:25:51.656 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:51.656 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:51.656 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:51.656 issued rwts: total=18709,9644,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:51.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:51.656 00:25:51.656 Run status group 0 (all jobs): 00:25:51.656 READ: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=292MiB (307MB), run=2004-2004msec 00:25:51.656 WRITE: bw=86.4MiB/s (90.5MB/s), 86.4MiB/s-86.4MiB/s (90.5MB/s-90.5MB/s), io=151MiB (158MB), run=1745-1745msec 00:25:51.656 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:51.656 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:51.656 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:51.656 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:51.656 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:51.656 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:51.656 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:51.656 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:51.656 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:51.656 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:51.656 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:51.656 rmmod nvme_tcp 00:25:51.961 rmmod nvme_fabrics 00:25:51.961 rmmod nvme_keyring 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1045090 ']' 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1045090 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1045090 ']' 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 
1045090 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1045090 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1045090' 00:25:51.961 killing process with pid 1045090 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1045090 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1045090 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:51.961 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:51.962 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:51.962 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.962 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.962 13:30:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.583 13:30:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:54.583 00:25:54.583 real 0m18.876s 00:25:54.583 user 1m7.477s 00:25:54.583 sys 0m8.229s 00:25:54.583 13:30:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:54.583 13:30:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.583 ************************************ 00:25:54.583 END TEST nvmf_fio_host 00:25:54.583 ************************************ 00:25:54.583 13:30:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:54.583 13:30:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:54.583 13:30:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:54.583 13:30:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.583 ************************************ 00:25:54.583 START TEST nvmf_failover 00:25:54.583 ************************************ 00:25:54.583 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:54.583 * Looking for test storage... 00:25:54.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:54.583 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:54.583 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:25:54.583 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:54.583 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:54.583 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:54.583 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:54.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.584 --rc genhtml_branch_coverage=1 00:25:54.584 --rc genhtml_function_coverage=1 00:25:54.584 --rc genhtml_legend=1 00:25:54.584 --rc geninfo_all_blocks=1 00:25:54.584 --rc geninfo_unexecuted_blocks=1 00:25:54.584 00:25:54.584 ' 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:54.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.584 --rc genhtml_branch_coverage=1 00:25:54.584 --rc genhtml_function_coverage=1 00:25:54.584 --rc genhtml_legend=1 00:25:54.584 --rc geninfo_all_blocks=1 00:25:54.584 --rc geninfo_unexecuted_blocks=1 00:25:54.584 00:25:54.584 ' 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:54.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.584 --rc genhtml_branch_coverage=1 00:25:54.584 --rc genhtml_function_coverage=1 00:25:54.584 --rc genhtml_legend=1 00:25:54.584 --rc geninfo_all_blocks=1 00:25:54.584 --rc geninfo_unexecuted_blocks=1 00:25:54.584 00:25:54.584 ' 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:54.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.584 --rc genhtml_branch_coverage=1 00:25:54.584 --rc genhtml_function_coverage=1 00:25:54.584 --rc genhtml_legend=1 00:25:54.584 --rc geninfo_all_blocks=1 00:25:54.584 --rc geninfo_unexecuted_blocks=1 00:25:54.584 00:25:54.584 ' 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:54.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
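The variables sourced from common.sh above (NVMF_PORT=4420, NVME_HOSTNQN from nvme gen-hostnqn, NVME_CONNECT='nvme connect') are the initiator-side knobs for this failover test. A hedged sketch of how they would combine into a host attach against the subsystem the test creates further down (nqn.2016-06.io.spdk:cnode1), using the addresses assigned earlier in this log:

# generate a host NQN exactly as common.sh does, then attach over TCP
NVME_HOSTNQN=$(nvme gen-hostnqn)
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$NVME_HOSTNQN"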
00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:54.584 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:54.585 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:54.585 13:30:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:02.719 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:02.719 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:26:02.719 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:02.719 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:02.719 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:02.719 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:02.719 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:02.720 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:02.721 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:02.721 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:02.721 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:02.722 Found net devices under 0000:31:00.0: cvl_0_0 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:02.722 Found net devices under 0000:31:00.1: cvl_0_1 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:02.722 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:02.723 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:02.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:02.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:26:02.986 00:26:02.986 --- 10.0.0.2 ping statistics --- 00:26:02.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.986 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:02.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:02.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:26:02.986 00:26:02.986 --- 10.0.0.1 ping statistics --- 00:26:02.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:02.986 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1052242 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1052242 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1052242 ']' 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:02.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.986 13:30:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:02.986 [2024-12-05 13:30:25.504748] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:26:02.986 [2024-12-05 13:30:25.504820] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.245 [2024-12-05 13:30:25.612406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:03.245 [2024-12-05 13:30:25.663060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
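The nvmfappstart call above reduces to two steps that can be reproduced by hand: launch nvmf_tgt inside the target namespace, then block until its RPC socket answers. A hedged sketch (workspace paths as in this run; the polling loop is a stand-in for the harness's waitforlisten helper, not its implementation):

```bash
#!/usr/bin/env bash
set -euo pipefail

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS_CMD=(ip netns exec cvl_0_0_ns_spdk)   # mirrors NVMF_TARGET_NS_CMD above

# Start the target on cores 1-3 (-m 0xE) inside the namespace, as the log does.
"${NS_CMD[@]}" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Poll the default RPC Unix socket until the app is listening (up to ~10 s).
# The socket lives on the filesystem, so it is reachable from the root namespace.
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done
echo "nvmf_tgt (pid $nvmfpid) is ready on /var/tmp/spdk.sock"
```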
00:26:03.245 [2024-12-05 13:30:25.663114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:03.245 [2024-12-05 13:30:25.663123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.245 [2024-12-05 13:30:25.663130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.245 [2024-12-05 13:30:25.663137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:03.245 [2024-12-05 13:30:25.664967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:03.245 [2024-12-05 13:30:25.665141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.245 [2024-12-05 13:30:25.665142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:03.816 13:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.816 13:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:03.817 13:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:03.817 13:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:03.817 13:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:03.817 13:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.817 13:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:04.077 [2024-12-05 13:30:26.493684] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.077 13:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:04.337 Malloc0 00:26:04.337 13:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:04.597 13:30:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:04.597 13:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:04.857 [2024-12-05 13:30:27.237316] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.857 13:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:04.857 [2024-12-05 13:30:27.413781] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:05.118 13:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:05.118 [2024-12-05 13:30:27.590336] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:26:05.118 13:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1052620 00:26:05.118 13:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:05.118 13:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1052620 /var/tmp/bdevperf.sock 00:26:05.118 13:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1052620 ']' 00:26:05.118 13:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:05.118 13:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:05.118 13:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:05.118 13:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:05.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:05.118 13:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:05.118 13:30:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:06.063 13:30:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.063 13:30:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:06.063 13:30:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:06.323 NVMe0n1 00:26:06.323 13:30:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:06.584 00:26:06.584 13:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1052948 00:26:06.584 13:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:06.584 13:30:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:07.972 13:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:07.972 [2024-12-05 13:30:30.283119] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d1c0 is same with the state(6) to be set 00:26:07.972 [2024-12-05 13:30:30.283156] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d1c0 is same with the state(6) to be set 00:26:07.972 [2024-12-05 13:30:30.283162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198d1c0 is same with the state(6) to be set 00:26:07.972 
[... tcp.c:1790 "The recv state of tqpair=0x198d1c0 is same with the state(6) to be set" repeated verbatim, timestamps 13:30:30.283167 through 13:30:30.283445 ...]
00:26:07.972 13:30:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:26:11.273 13:30:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:11.273
00:26:11.273 13:30:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:11.273 [2024-12-05 13:30:33.750761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198dee0 is same with the state(6) to be set
[... identical message repeated verbatim for tqpair=0x198dee0, timestamps 13:30:33.750795 through 13:30:33.751139 ...]
00:26:11.274 13:30:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:26:14.579 13:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:14.579 [2024-12-05 13:30:36.945077] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:14.579 13:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:26:15.520 13:30:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:15.780 [2024-12-05 13:30:38.140091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ad9590 is same with the state(6) to be set
[... identical message repeated verbatim for tqpair=0x1ad9590, timestamps 13:30:38.140137 through 13:30:38.140239 ...]
00:26:15.780 13:30:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1052948
00:26:22.363 {
  "results": [
    {
      "job": "NVMe0n1",
      "core_mask": "0x1",
      "workload": "verify",
      "status": "finished",
      "verify_range": {
        "start": 0,
        "length": 16384
      },
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 15.008383,
      "iops": 11115.521239030213,
      "mibps": 43.42000483996177,
      "io_failed": 5189,
      "io_timeout": 0,
      "avg_latency_us": 11139.876156846785,
      "min_latency_us": 788.48,
      "max_latency_us": 14527.146666666667
    }
  ],
  "core_count": 1
}
00:26:22.363 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1052620
00:26:22.363 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1052620 ']'
00:26:22.363 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1052620
00:26:22.363 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:22.363 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
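The summary block above is internally consistent: with 4 KiB I/O, throughput in MiB/s is simply IOPS multiplied by io_size and divided by 2^20, which reproduces the reported mibps figure:

```bash
# Sanity-check the bdevperf summary: 11115.52 IOPS at 4096 B per I/O.
awk 'BEGIN { printf "%.2f MiB/s\n", 11115.521239030213 * 4096 / (1024 * 1024) }'
# prints 43.42, matching the "mibps" field above
```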
00:26:22.363 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1052620 00:26:22.363 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:22.363 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:22.363 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1052620' 00:26:22.363 killing process with pid 1052620 00:26:22.363 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1052620 00:26:22.363 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1052620 00:26:22.363 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:22.363 [2024-12-05 13:30:27.672705] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:26:22.363 [2024-12-05 13:30:27.672766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1052620 ] 00:26:22.363 [2024-12-05 13:30:27.751741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.363 [2024-12-05 13:30:27.787958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.363 Running I/O for 15 seconds... 00:26:22.363 11217.00 IOPS, 43.82 MiB/s [2024-12-05T12:30:44.931Z] [2024-12-05 13:30:30.284456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.363 [2024-12-05 13:30:30.284489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.363 [2024-12-05 13:30:30.284505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.363 [2024-12-05 13:30:30.284514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.363 [2024-12-05 13:30:30.284525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.363 [2024-12-05 13:30:30.284532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.363 [2024-12-05 13:30:30.284542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.363 [2024-12-05 13:30:30.284549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.363 [2024-12-05 13:30:30.284559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.363 [2024-12-05 13:30:30.284566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.363 [2024-12-05 13:30:30.284575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:26:22.363 [2024-12-05 13:30:30.284583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / "ABORTED - SQ DELETION" completion pairs repeated for lba:96976 through lba:97368 ...]
00:26:22.365 [2024-12-05 13:30:30.285446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.365 [2024-12-05 13:30:30.285454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching WRITE command / "ABORTED - SQ DELETION" completion pairs repeated for lba:97576 through lba:97640 ...]
00:26:22.365 [2024-12-05
13:30:30.285613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285780] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:51 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.365 [2024-12-05 13:30:30.285974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.285983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.365 [2024-12-05 13:30:30.285991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.286000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.365 [2024-12-05 13:30:30.286007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.286016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.365 [2024-12-05 13:30:30.286024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.365 [2024-12-05 13:30:30.286033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.365 [2024-12-05 13:30:30.286040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97824 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.366 [2024-12-05 13:30:30.286124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.366 [2024-12-05 13:30:30.286140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.366 [2024-12-05 13:30:30.286157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.366 [2024-12-05 13:30:30.286175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.366 [2024-12-05 13:30:30.286192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.366 [2024-12-05 13:30:30.286209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.366 [2024-12-05 13:30:30.286225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.366 [2024-12-05 13:30:30.286242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 
[2024-12-05 13:30:30.286292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.366 [2024-12-05 13:30:30.286510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.366 [2024-12-05 13:30:30.286526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.366 [2024-12-05 13:30:30.286543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.366 [2024-12-05 13:30:30.286559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:97912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.366 [2024-12-05 13:30:30.286576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.366 [2024-12-05 13:30:30.286594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.366 [2024-12-05 13:30:30.286610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.366 [2024-12-05 13:30:30.286620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.366 [2024-12-05 13:30:30.286627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
00:26:22.366 [2024-12-05 13:30:30.286651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:22.366 [2024-12-05 13:30:30.286658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:22.366 [2024-12-05 13:30:30.286664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97944 len:8 PRP1 0x0 PRP2 0x0
00:26:22.366 [2024-12-05 13:30:30.286672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.366 [2024-12-05 13:30:30.286714] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:22.366 [2024-12-05 13:30:30.286735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:22.366 [2024-12-05 13:30:30.286743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.366 [2024-12-05 13:30:30.286752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:22.366 [2024-12-05 13:30:30.286759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.366 [2024-12-05 13:30:30.286768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:22.367 [2024-12-05 13:30:30.286775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.367 [2024-12-05 13:30:30.286783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:22.367 [2024-12-05 13:30:30.286790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.367 [2024-12-05 13:30:30.286798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:22.367 [2024-12-05 13:30:30.290386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:22.367 [2024-12-05 13:30:30.290411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2060d90 (9): Bad file descriptor
00:26:22.367 [2024-12-05 13:30:30.360359] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
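The records above are the bdev_nvme failover path in one piece: the TCP connection to 10.0.0.2:4420 fails, every request still queued on the I/O and admin qpairs is printed once and completed with ABORTED - SQ DELETION (00/08), the controller is marked failed and disconnected, and the reset against the alternate path 10.0.0.2:4421 succeeds about 70 ms later. When triaging floods like the one condensed above, it is usually enough to collapse the command/completion pairs into per-opcode LBA ranges. A minimal sketch of such a triage helper follows (hypothetical, not part of SPDK; it assumes only the *NOTICE* record format visible in this log, one record per line as in the raw console output):

#!/usr/bin/env python3
# Hypothetical triage helper: collapse repeated nvme_qpair "print_command"
# notices into per-opcode LBA ranges and count the matching
# "ABORTED - SQ DELETION" completions. Assumes one log record per line.
import re
import sys
from collections import defaultdict

CMD = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:\d+ nsid:\d+ lba:(\d+) len:\d+")
ABORTED = re.compile(r"\*NOTICE\*: ABORTED - SQ DELETION \(00/08\)")

lbas = defaultdict(list)  # (opcode, sqid) -> list of LBAs seen
aborts = 0
for line in sys.stdin:
    m = CMD.search(line)
    if m:
        opcode, sqid, lba = m.groups()
        lbas[(opcode, sqid)].append(int(lba))
    if ABORTED.search(line):
        aborts += 1

for (opcode, sqid), seen in sorted(lbas.items()):
    print(f"{opcode} sqid:{sqid}: {len(seen)} commands, lba {min(seen)}..{max(seen)}")
print(f"ABORTED - SQ DELETION completions: {aborts}")

Fed this console log on stdin, it would report the flood above as roughly "READ sqid:1: 44 commands, lba 97216..97560" and "WRITE sqid:1: 48 commands, lba 97568..97944", matching the condensed summary.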
00:26:22.367 10812.00 IOPS, 42.23 MiB/s [2024-12-05T12:30:44.935Z] 10899.33 IOPS, 42.58 MiB/s [2024-12-05T12:30:44.935Z] 10963.50 IOPS, 42.83 MiB/s [2024-12-05T12:30:44.935Z]
00:26:22.367-00:26:22.369 [2024-12-05 13:30:33.752755-.754651] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [condensed: ~110 queued command/completion pairs on sqid:1 — READ lba:27648-28104 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE lba:28112-28520 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), various cids, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:26:22.369 [2024-12-05 13:30:33.754660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:28528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.369 [2024-12-05 13:30:33.754667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:28536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.370 [2024-12-05 13:30:33.754684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:28544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.370 [2024-12-05 13:30:33.754700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:28552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.370 [2024-12-05 13:30:33.754717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:28560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.370 [2024-12-05 13:30:33.754734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:28568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.370 [2024-12-05 13:30:33.754751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:28576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.370 [2024-12-05 13:30:33.754768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:28584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.370 [2024-12-05 13:30:33.754784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:28592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.370 [2024-12-05 13:30:33.754801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:28600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.370 [2024-12-05 13:30:33.754817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:28608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.370 [2024-12-05 13:30:33.754834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.370 [2024-12-05 13:30:33.754850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:28624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.370 [2024-12-05 13:30:33.754871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.370 [2024-12-05 13:30:33.754888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:28640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.370 [2024-12-05 13:30:33.754904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:28648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.370 [2024-12-05 13:30:33.754921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:28656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.370 [2024-12-05 13:30:33.754937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.754959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:22.370 [2024-12-05 13:30:33.754966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:22.370 [2024-12-05 13:30:33.754973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:28664 len:8 PRP1 0x0 PRP2 0x0 00:26:22.370 [2024-12-05 13:30:33.754983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.755022] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:22.370 [2024-12-05 13:30:33.755043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.370 [2024-12-05 13:30:33.755051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.755060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.370 [2024-12-05 13:30:33.755067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.755075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.370 [2024-12-05 13:30:33.755083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.755091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:22.370 [2024-12-05 13:30:33.755098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:33.755106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:22.370 [2024-12-05 13:30:33.755140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2060d90 (9): Bad file descriptor 00:26:22.370 [2024-12-05 13:30:33.758704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:22.370 [2024-12-05 13:30:33.781989] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:26:22.370 11042.80 IOPS, 43.14 MiB/s [2024-12-05T12:30:44.938Z] 11058.50 IOPS, 43.20 MiB/s [2024-12-05T12:30:44.938Z] 11065.00 IOPS, 43.22 MiB/s [2024-12-05T12:30:44.938Z] 11047.88 IOPS, 43.16 MiB/s [2024-12-05T12:30:44.938Z] [2024-12-05 13:30:38.140616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:31920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.370 [2024-12-05 13:30:38.140651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:38.140668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:31928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.370 [2024-12-05 13:30:38.140677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:38.140687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.370 [2024-12-05 13:30:38.140695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:38.140705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:31944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.370 [2024-12-05 13:30:38.140712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:38.140722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.370 [2024-12-05 13:30:38.140729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:38.140739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:31960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.370 [2024-12-05 13:30:38.140757] nvme_qpair.c: 474:spdk_nvme_print_completion: 
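The block above is one complete failover round trip as bdev_nvme sees it: the target side deletes its submission queues, every in-flight WRITE on qid:1 is completed with ABORTED - SQ DELETION, the remaining queued requests are manually completed, bdev_nvme_failover_trid moves the path from 10.0.0.2:4421 to 10.0.0.2:4422, the failed controller is disconnected and reset, and bdevperf throughput recovers to ~11k IOPS. A quick way to pull those transitions out of a saved copy of this output is a pair of greps; this is a hypothetical post-processing helper, not part of failover.sh, and LOGFILE is an assumed capture path:

    #!/usr/bin/env bash
    # Hypothetical helper: summarize failover activity from a saved bdevperf log.
    # Assumption: the output above was captured to the file given as $1.
    LOGFILE=${1:-bdevperf.log}

    # Each path change is announced once by bdev_nvme_failover_trid.
    grep -o 'Start failover from [0-9.:]* to [0-9.:]*' "$LOGFILE"

    # A failover only counts once the follow-up reset completes; the test
    # script asserts exactly three of these later in this log.
    grep -c 'Resetting controller successful' "$LOGFILE"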
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:38.140767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.370 [2024-12-05 13:30:38.140774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:38.140784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.370 [2024-12-05 13:30:38.140791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:38.140802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.370 [2024-12-05 13:30:38.140810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:38.140820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:31992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.370 [2024-12-05 13:30:38.140827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:38.140837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:32000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.370 [2024-12-05 13:30:38.140844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:38.140854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.370 [2024-12-05 13:30:38.140866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:38.140877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:32016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.370 [2024-12-05 13:30:38.140884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:38.140894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.370 [2024-12-05 13:30:38.140902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:38.140911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:32032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.370 [2024-12-05 13:30:38.140919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.370 [2024-12-05 13:30:38.140928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:32040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.140936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.140945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:32048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.140952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.140962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:32056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.140969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.140981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.140988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.140998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:32080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:32120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:22.371 [2024-12-05 13:30:38.141116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:32128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:32152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:32184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141288] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:32216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:32224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:32256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:32264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:32312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:32320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:32336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.371 [2024-12-05 13:30:38.141606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.371 [2024-12-05 13:30:38.141616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:89 nsid:1 lba:32368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:32392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:32416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:32432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:32448 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:32456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:32464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:32472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:32488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:32520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.141986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.141995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:32528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:22.372 [2024-12-05 13:30:38.142003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.142012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.142020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.142029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:32544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.142037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.142047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.142055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.142064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:32560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.142072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.142083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.142092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.142102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.142109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.142119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.142126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.142135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:32592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.142143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.142153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:32600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.142160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.142170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.142177] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.142187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:32616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.142195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.142204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.142212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.142222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:32632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.142229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.142239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.372 [2024-12-05 13:30:38.142246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.372 [2024-12-05 13:30:38.142256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:32648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.373 [2024-12-05 13:30:38.142264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.373 [2024-12-05 13:30:38.142274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.373 [2024-12-05 13:30:38.142282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.373 [2024-12-05 13:30:38.142291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.373 [2024-12-05 13:30:38.142301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.373 [2024-12-05 13:30:38.142311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.373 [2024-12-05 13:30:38.142320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.373 [2024-12-05 13:30:38.142330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:32680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.373 [2024-12-05 13:30:38.142338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.373 [2024-12-05 13:30:38.142347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.373 [2024-12-05 13:30:38.142356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.373 [2024-12-05 13:30:38.142366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:32696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.373 [2024-12-05 13:30:38.142374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.373 [2024-12-05 13:30:38.142383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.373 [2024-12-05 13:30:38.142391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.373 [2024-12-05 13:30:38.142401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.373 [2024-12-05 13:30:38.142409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.373 [2024-12-05 13:30:38.142419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:32720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.373 [2024-12-05 13:30:38.142427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.373 [2024-12-05 13:30:38.142437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.373 [2024-12-05 13:30:38.142444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.373 [2024-12-05 13:30:38.142454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:32736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.373 [2024-12-05 13:30:38.142462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.373 [2024-12-05 13:30:38.142472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:32744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.373 [2024-12-05 13:30:38.142479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.373 [2024-12-05 13:30:38.142489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:32752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:22.373 [2024-12-05 13:30:38.142497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.373 [2024-12-05 13:30:38.142506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.373 [2024-12-05 13:30:38.142513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:22.373 [2024-12-05 13:30:38.142525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:22.373 [2024-12-05 13:30:38.142533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.373 [2024-12-05 13:30:38.142542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:32776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:22.373 [2024-12-05 13:30:38.142550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.373 [... the same WRITE command / ABORTED - SQ DELETION completion pair repeats for lba:32784 through lba:32928 (cids 81, 122, 21, 88, 36, 30, 106, 33, 110, 123, 70, 80, 67, 66, 78, 34, 86, 74, 17); identical notices elided ...]
00:26:22.373 [2024-12-05 13:30:38.142909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:22.373 [2024-12-05 13:30:38.142916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:22.373 [2024-12-05 13:30:38.142923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32936 len:8 PRP1 0x0 PRP2 0x0
00:26:22.373 [2024-12-05 13:30:38.142931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.373 [2024-12-05 13:30:38.142973] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:26:22.373 [2024-12-05 13:30:38.142996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:22.373 [2024-12-05 13:30:38.143004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.373 [2024-12-05 13:30:38.143013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:22.373 [2024-12-05 13:30:38.143021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.373 [2024-12-05 13:30:38.143030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:22.374 [2024-12-05 13:30:38.143039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.374 [2024-12-05 13:30:38.143047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:22.374 [2024-12-05 13:30:38.143055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:22.374 [2024-12-05 13:30:38.143063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:26:22.374 [2024-12-05 13:30:38.146650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:26:22.374 [2024-12-05 13:30:38.146677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2060d90 (9): Bad file descriptor
00:26:22.374 [2024-12-05 13:30:38.177944] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
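The burst of notices above is the expected signature of a path switch: when the active connection is torn down, every queued WRITE on the I/O qpair completes with ABORTED - SQ DELETION, bdev_nvme moves the trid from 10.0.0.2:4422 back to 10.0.0.2:4420, and the subsequent controller reset succeeds. A minimal sketch (not part of the test suite) for tallying these events from a captured log, assuming the output was saved to try.txt, the scratch file this test script writes:

    log=${1:-try.txt}
    # each count pairs with one traced path switch
    echo "failover starts:   $(grep -c 'Start failover' "$log")"
    echo "successful resets: $(grep -c 'Resetting controller successful' "$log")"
    echo "aborted commands:  $(grep -c 'ABORTED - SQ DELETION' "$log")"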
00:26:22.374 11011.11 IOPS, 43.01 MiB/s [2024-12-05T12:30:44.942Z] 11056.20 IOPS, 43.19 MiB/s [2024-12-05T12:30:44.942Z] 11100.55 IOPS, 43.36 MiB/s [2024-12-05T12:30:44.942Z] 11113.33 IOPS, 43.41 MiB/s [2024-12-05T12:30:44.942Z] 11123.38 IOPS, 43.45 MiB/s [2024-12-05T12:30:44.942Z] 11122.50 IOPS, 43.45 MiB/s
00:26:22.374 Latency(us)
00:26:22.374 [2024-12-05T12:30:44.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:22.374 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:22.374 Verification LBA range: start 0x0 length 0x4000
00:26:22.374 NVMe0n1 : 15.01 11115.52 43.42 345.74 0.00 11139.88 788.48 14527.15
00:26:22.374 [2024-12-05T12:30:44.942Z] ===================================================================================================================
00:26:22.374 [2024-12-05T12:30:44.942Z] Total : 11115.52 43.42 345.74 0.00 11139.88 788.48 14527.15
00:26:22.374 Received shutdown signal, test time was about 15.000000 seconds
00:26:22.374
00:26:22.374 Latency(us)
00:26:22.374 [2024-12-05T12:30:44.942Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:22.374 [2024-12-05T12:30:44.942Z] ===================================================================================================================
00:26:22.374 [2024-12-05T12:30:44.942Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:22.374 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:26:22.374 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:26:22.374 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:26:22.374 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1055961
00:26:22.374 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1055961 /var/tmp/bdevperf.sock
00:26:22.374 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:26:22.374 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1055961 ']'
00:26:22.374 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:22.374 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:22.374 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
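Note the assertion just above: the 15-second run must have produced exactly three 'Resetting controller successful' notices, and (( count != 3 )) would fail the test otherwise. The trace then restarts bdevperf in RPC-server mode, and the lines that follow wire up the multipath configuration. A condensed sketch of those steps, using the binaries, sockets and arguments printed in the trace (the loop is an editorial compression of the three separate attach calls; the harness itself waits for the socket via waitforlisten before issuing RPCs):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    # -z makes bdevperf wait for configuration over its RPC socket
    "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!

    # target side: two extra listeners for the same subsystem
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # host side: three paths to one controller, failover policy
    for port in 4420 4421 4422; do
        "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -x failover
    done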
00:26:22.374 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:22.374 13:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:22.944 13:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:22.944 13:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:26:22.944 13:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:22.944 [2024-12-05 13:30:45.482476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:26:23.204 13:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:23.204 [2024-12-05 13:30:45.666939] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:26:23.204 13:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:23.464 NVMe0n1
00:26:23.464 13:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:23.724 00
00:26:23.724 13:30:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:24.296 00
00:26:24.296 13:30:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:24.296 13:30:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:26:24.296 13:30:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:24.558 13:30:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:26:27.862 13:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:27.862 13:30:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:26:27.862 13:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1056978
00:26:27.862 13:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:27.862 13:30:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1056978
00:26:28.821 {
00:26:28.821 "results": [
00:26:28.821 {
00:26:28.821 "job": "NVMe0n1",
00:26:28.821 "core_mask": "0x1",
00:26:28.821 "workload": "verify",
00:26:28.821 "status": "finished",
00:26:28.821 "verify_range": {
00:26:28.821 "start": 0,
00:26:28.821 "length": 16384
00:26:28.821 },
00:26:28.821 "queue_depth": 128,
00:26:28.821 "io_size": 4096,
00:26:28.821 "runtime": 1.004858,
00:26:28.821 "iops": 11411.562628749534,
00:26:28.821 "mibps": 44.57641651855287,
00:26:28.821 "io_failed": 0,
00:26:28.821 "io_timeout": 0,
00:26:28.821 "avg_latency_us": 11163.140569169502,
00:26:28.821 "min_latency_us": 481.28,
00:26:28.821 "max_latency_us": 9284.266666666666
00:26:28.821 }
00:26:28.821 ],
00:26:28.821 "core_count": 1
00:26:28.821 }
00:26:28.821 13:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:28.821 [2024-12-05 13:30:44.534848] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization...
00:26:28.821 [2024-12-05 13:30:44.534928] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055961 ]
00:26:28.821 [2024-12-05 13:30:44.613667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:28.821 [2024-12-05 13:30:44.650211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:28.821 [2024-12-05 13:30:46.937503] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:28.821 [2024-12-05 13:30:46.937550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:28.821 [2024-12-05 13:30:46.937562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:28.821 [2024-12-05 13:30:46.937571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:28.821 [2024-12-05 13:30:46.937579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:28.821 [2024-12-05 13:30:46.937587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:28.821 [2024-12-05 13:30:46.937595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:28.821 [2024-12-05 13:30:46.937603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:28.821 [2024-12-05 13:30:46.937610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:28.821 [2024-12-05 13:30:46.937618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:26:28.821 [2024-12-05 13:30:46.937646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:26:28.821 [2024-12-05 13:30:46.937661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x573d90 (9): Bad file descriptor
00:26:28.821 [2024-12-05 13:30:46.943881] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:26:28.821 Running I/O for 1 seconds...
00:26:28.821 11339.00 IOPS, 44.29 MiB/s
00:26:28.821 Latency(us)
00:26:28.821 [2024-12-05T12:30:51.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:28.821 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:28.821 Verification LBA range: start 0x0 length 0x4000
00:26:28.821 NVMe0n1 : 1.00 11411.56 44.58 0.00 0.00 11163.14 481.28 9284.27
00:26:28.821 [2024-12-05T12:30:51.389Z] ===================================================================================================================
00:26:28.821 [2024-12-05T12:30:51.389Z] Total : 11411.56 44.58 0.00 0.00 11163.14 481.28 9284.27
00:26:28.821 13:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:29.082 13:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:26:29.082 13:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:29.082 13:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:26:29.082 13:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:29.342 13:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:29.602 13:30:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:26:32.899 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:32.899 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:26:32.899 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1055961
00:26:32.899 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1055961 ']'
00:26:32.899 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1055961
00:26:32.899 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:32.899 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:32.899 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1055961
00:26:32.899 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
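The perform_tests output printed above is plain JSON on stdout, so spot checks are easy to script. A sketch only, assuming the JSON was captured to results.json (jq is an assumption here, not something this script uses):

    jq -r '.results[]
        | "\(.job): \(.iops | round) IOPS, avg \(.avg_latency_us | round) us, \(.io_failed) failed"' \
        results.json

For the run above this prints 'NVMe0n1: 11412 IOPS, avg 11163 us, 0 failed', matching the Total row of the one-second latency table.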
00:26:32.899 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:32.899 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1055961'
00:26:32.899 killing process with pid 1055961
00:26:32.899 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1055961
00:26:32.899 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1055961
00:26:32.899 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:26:32.899 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:33.158 rmmod nvme_tcp
00:26:33.158 rmmod nvme_fabrics
00:26:33.158 rmmod nvme_keyring
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1052242 ']'
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1052242
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1052242 ']'
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1052242
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1052242
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1052242'
00:26:33.158 killing process with pid 1052242
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1052242
00:26:33.158 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1052242
00:26:33.417 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
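killprocess, traced twice above (once for bdevperf, once for the nvmf target), follows a simple pattern. A simplified rendering of the steps the trace shows — the real helper in test/common/autotest_common.sh carries more guards, so treat this as a sketch:

    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1                 # '[' -z ... ']'
        kill -0 "$pid" 2>/dev/null || return 0    # nothing to do if already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # the real helper special-cases process_name = sudo; elided here
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                   # reap it if it is our child
    }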
00:26:33.417 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:33.417 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:33.417 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:26:33.417 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:26:33.417 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:33.417 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:26:33.417 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:33.417 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:33.417 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:33.417 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:33.417 13:30:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:35.960 13:30:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:35.960
00:26:35.960 real 0m41.287s
00:26:35.960 user 2m3.970s
00:26:35.960 sys 0m9.324s
00:26:35.960 13:30:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:35.960 13:30:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:35.960 ************************************
00:26:35.960 END TEST nvmf_failover
00:26:35.960 ************************************
00:26:35.960 13:30:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:26:35.960 13:30:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:35.960 13:30:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:35.960 13:30:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.960 ************************************
00:26:35.960 START TEST nvmf_host_discovery
00:26:35.960 ************************************
00:26:35.960 13:30:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:26:35.960 * Looking for test storage...
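One detail of the nvmftestfini teardown above is worth calling out: firewall cleanup is done by filtering the saved ruleset rather than deleting rules one by one. Every rule the test inserts carries an SPDK_NVMF comment (the tagging is visible in the setup trace later in this log), so the pre-test state comes back with a single pipeline (run as root):

    iptables-save | grep -v SPDK_NVMF | iptables-restore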
00:26:35.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:35.960 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:35.960 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:26:35.960 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:35.960 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:35.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.961 --rc genhtml_branch_coverage=1 00:26:35.961 --rc genhtml_function_coverage=1 00:26:35.961 --rc genhtml_legend=1 00:26:35.961 --rc geninfo_all_blocks=1 00:26:35.961 --rc geninfo_unexecuted_blocks=1 00:26:35.961 00:26:35.961 ' 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:35.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.961 --rc genhtml_branch_coverage=1 00:26:35.961 --rc genhtml_function_coverage=1 00:26:35.961 --rc genhtml_legend=1 00:26:35.961 --rc geninfo_all_blocks=1 00:26:35.961 --rc geninfo_unexecuted_blocks=1 00:26:35.961 00:26:35.961 ' 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:35.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.961 --rc genhtml_branch_coverage=1 00:26:35.961 --rc genhtml_function_coverage=1 00:26:35.961 --rc genhtml_legend=1 00:26:35.961 --rc geninfo_all_blocks=1 00:26:35.961 --rc geninfo_unexecuted_blocks=1 00:26:35.961 00:26:35.961 ' 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:35.961 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.961 --rc genhtml_branch_coverage=1 00:26:35.961 --rc genhtml_function_coverage=1 00:26:35.961 --rc genhtml_legend=1 00:26:35.961 --rc geninfo_all_blocks=1 00:26:35.961 --rc geninfo_unexecuted_blocks=1 00:26:35.961 00:26:35.961 ' 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:35.961 13:30:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=[... long toolchain value: repeated /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin prefixes plus the system PATH; elided ...]
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=[... same value, rotated; elided ...]
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=[... same value, rotated; elided ...]
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo [... same PATH value; elided ...]
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:35.961 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:26:35.961 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery --
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:35.962 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:35.962 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:35.962 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:35.962 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:35.962 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:35.962 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.962 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:35.962 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:35.962 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:35.962 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.962 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.962 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.962 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:35.962 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:35.962 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:35.962 13:30:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:44.094 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:44.094 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.094 13:31:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:44.094 Found net devices under 0000:31:00.0: cvl_0_0 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:44.094 Found net devices under 0000:31:00.1: cvl_0_1 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:44.094 
13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:44.094 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:44.095 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.095 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.095 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:44.095 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:44.095 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.095 13:31:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:44.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.719 ms 00:26:44.095 00:26:44.095 --- 10.0.0.2 ping statistics --- 00:26:44.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.095 rtt min/avg/max/mdev = 0.719/0.719/0.719/0.000 ms 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:44.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:26:44.095 00:26:44.095 --- 10.0.0.1 ping statistics --- 00:26:44.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.095 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1062689 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1062689 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1062689 ']' 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:44.095 13:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.095 [2024-12-05 13:31:06.396124] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
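The common.sh setup traced above moved one port of the E810 pair into a private network namespace to act as the target (10.0.0.2), while the peer port stays in the root namespace as the initiator (10.0.0.1). Condensed into plain commands, using the interface and namespace names printed in the trace:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator stays in root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP port, tagged so the teardown can strip it again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                           # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator

The sub-millisecond round trips in the ping statistics above confirm the link is up before the target application is started.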
00:26:44.095 [2024-12-05 13:31:06.396218] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.095 [2024-12-05 13:31:06.505267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:44.095 [2024-12-05 13:31:06.556271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.095 [2024-12-05 13:31:06.556323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.095 [2024-12-05 13:31:06.556332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.095 [2024-12-05 13:31:06.556339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.095 [2024-12-05 13:31:06.556345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:44.095 [2024-12-05 13:31:06.557161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.668 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:44.668 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:44.668 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:44.668 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:44.668 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.928 [2024-12-05 13:31:07.251976] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.928 [2024-12-05 13:31:07.264249] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.928 null0 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.928 null1 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1063013 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1063013 /tmp/host.sock 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1063013 ']' 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:44.928 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:44.928 13:31:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.928 [2024-12-05 13:31:07.361261] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
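Before the host side comes up, the target has been prepared with a handful of RPCs, visible in the rpc_cmd traces above. Written out as direct rpc.py calls (a sketch; the script's rpc_cmd wrapper is what actually runs, against the target's default /var/tmp/spdk.sock inside the target namespace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    # listen for discovery requests on the well-known discovery NQN
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.2 -s 8009
    # two null bdevs (1000 MiB, 512 B blocks) to back the test subsystems
    $RPC bdev_null_create null0 1000 512
    $RPC bdev_null_create null1 1000 512
    $RPC bdev_wait_for_examine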
00:26:44.928 [2024-12-05 13:31:07.361324] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1063013 ]
00:26:44.928 [2024-12-05 13:31:07.444839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:44.928 [2024-12-05 13:31:07.486710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:45.868 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
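The rpc_cmd | jq | sort | xargs pipelines that keep repeating in this trace are the test's two polling helpers. Reconstructed from the xtrace lines above (host/discovery.sh@59 and @55), so the exact upstream function bodies may differ slightly:

    get_subsystem_names() {
        # Controller names known to the host-side bdev_nvme layer ("" until attach).
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        # Bdevs created from attached namespaces, e.g. "nvme0n1 nvme0n2".
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

Both helpers print the empty string while nothing is attached yet, which is why the first checks in the trace compare against ''.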
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:45.869 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:46.130 [2024-12-05 13:31:08.495274] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:26:46.130 13:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:26:46.701 [2024-12-05 13:31:09.220775] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:26:46.701 [2024-12-05 13:31:09.220795] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
[2024-12-05 13:31:09.220809] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:26:46.961 [2024-12-05 13:31:09.308081] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:26:46.961 [2024-12-05 13:31:09.409944] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:26:46.961 [2024-12-05 13:31:09.410903] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d644d0:1 started.
00:26:46.961 [2024-12-05 13:31:09.412501] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:26:46.961 [2024-12-05 13:31:09.412519] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:26:46.961 [2024-12-05 13:31:09.419496] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d644d0 was disconnected and freed. delete nvme_qpair.
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:47.221 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:47.482 13:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:47.744 [2024-12-05 13:31:10.068702] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d649d0:1 started.
00:26:47.744 [2024-12-05 13:31:10.072341] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d649d0 was disconnected and freed. delete nvme_qpair.
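Every check in this test runs through the same wait loop, and the notification checks additionally keep a cursor into the notify log. Both shapes below are reconstructed from the traced lines (autotest_common.sh@918-@924 and host/discovery.sh@74-@75); treat them as a sketch rather than the exact sources:

    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0   # re-evaluate the condition string
            sleep 1                    # matches the 'sleep 1' seen in the trace
        done
        return 1
    }
    get_notification_count() {
        # Count notifications newer than $notify_id, then advance the cursor;
        # this is why notify_id steps 0 -> 1 -> 2 across the checks in this run.
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }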
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:47.744 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:47.745 [2024-12-05 13:31:10.159769] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:26:47.745 [2024-12-05 13:31:10.160304] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:26:47.745 [2024-12-05 13:31:10.160325] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:47.745 [2024-12-05 13:31:10.247031] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:26:47.745 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:48.005 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:26:48.005 13:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:26:48.005 [2024-12-05 13:31:10.347061] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
00:26:48.005 [2024-12-05 13:31:10.347100] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:26:48.005 [2024-12-05 13:31:10.347109] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:26:48.005 [2024-12-05 13:31:10.347115] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:26:48.944 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:48.944 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:26:48.944 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:26:48.944 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:48.945 [2024-12-05 13:31:11.431846] bdev_nvme.c:7466:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:26:48.945 [2024-12-05 13:31:11.431876] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:26:48.945 [2024-12-05 13:31:11.433282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:48.945 [2024-12-05 13:31:11.433299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:48.945 [2024-12-05 13:31:11.433309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:48.945 [2024-12-05 13:31:11.433317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:48.945 [2024-12-05 13:31:11.433325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:48.945 [2024-12-05 13:31:11.433337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:48.945 [2024-12-05 13:31:11.433345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:48.945 [2024-12-05 13:31:11.433353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:48.945 [2024-12-05 13:31:11.433360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34c10 is same with the state(6) to be set
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:48.945 [2024-12-05 13:31:11.443295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d34c10 (9): Bad file descriptor
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:26:48.945 [2024-12-05 13:31:11.453329] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:48.945 [2024-12-05 13:31:11.453342] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:48.945 [2024-12-05 13:31:11.453347] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:48.945 [2024-12-05 13:31:11.453353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:48.945 [2024-12-05 13:31:11.453370] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:48.945 [2024-12-05 13:31:11.453568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.945 [2024-12-05 13:31:11.453583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d34c10 with addr=10.0.0.2, port=4420
00:26:48.945 [2024-12-05 13:31:11.453591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34c10 is same with the state(6) to be set
00:26:48.945 [2024-12-05 13:31:11.453603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d34c10 (9): Bad file descriptor
00:26:48.945 [2024-12-05 13:31:11.453614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:48.945 [2024-12-05 13:31:11.453621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:48.945 [2024-12-05 13:31:11.453630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:48.945 [2024-12-05 13:31:11.453636] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:48.945 [2024-12-05 13:31:11.453642] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:48.945 [2024-12-05 13:31:11.453650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
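The error churn that starts here is the expected part of the test: host/discovery.sh@127 just removed the 4420 listener, and errno = 111 is ECONNREFUSED on Linux, i.e. the kernel refusing each reconnect to 10.0.0.2:4420. Every retry block that follows is one pass of the same cycle, summarized below (a reading of the log, not test code, apart from the RPC line taken from the trace):

    # Trigger (target side), as traced above:
    ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Per-retry cycle logged by bdev_nvme for [nqn.2016-06.io.spdk:cnode0, 1]:
    #   delete qpairs -> disconnect ctrlr -> reconnect -> connect() = ECONNREFUSED
    #   -> "controller reinitialization failed" -> "Resetting controller failed."
    # The churn stops once the discovery poller re-reads the log page and drops
    # the stale 4420 path (the "...4420 not found" line further down).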
00:26:48.945 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:48.945 [2024-12-05 13:31:11.463401] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:48.945 [2024-12-05 13:31:11.463412] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:48.945 [2024-12-05 13:31:11.463417] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:48.945 [2024-12-05 13:31:11.463422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:48.945 [2024-12-05 13:31:11.463436] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:48.945 [2024-12-05 13:31:11.463721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.945 [2024-12-05 13:31:11.463734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d34c10 with addr=10.0.0.2, port=4420
00:26:48.945 [2024-12-05 13:31:11.463741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34c10 is same with the state(6) to be set
00:26:48.945 [2024-12-05 13:31:11.463752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d34c10 (9): Bad file descriptor
00:26:48.945 [2024-12-05 13:31:11.463763] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:48.945 [2024-12-05 13:31:11.463769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:48.945 [2024-12-05 13:31:11.463776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:48.945 [2024-12-05 13:31:11.463783] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:48.945 [2024-12-05 13:31:11.463788] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:48.945 [2024-12-05 13:31:11.463792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:48.946 [2024-12-05 13:31:11.473467] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:48.946 [2024-12-05 13:31:11.473478] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:48.946 [2024-12-05 13:31:11.473483] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:48.946 [2024-12-05 13:31:11.473488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:48.946 [2024-12-05 13:31:11.473501] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:48.946 [2024-12-05 13:31:11.473788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.946 [2024-12-05 13:31:11.473800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d34c10 with addr=10.0.0.2, port=4420
00:26:48.946 [2024-12-05 13:31:11.473807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34c10 is same with the state(6) to be set
00:26:48.946 [2024-12-05 13:31:11.473818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d34c10 (9): Bad file descriptor
00:26:48.946 [2024-12-05 13:31:11.473828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:48.946 [2024-12-05 13:31:11.473834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:48.946 [2024-12-05 13:31:11.473842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:48.946 [2024-12-05 13:31:11.473851] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:48.946 [2024-12-05 13:31:11.473856] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:48.946 [2024-12-05 13:31:11.473860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:48.946 [2024-12-05 13:31:11.483533] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:48.946 [2024-12-05 13:31:11.483547] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:48.946 [2024-12-05 13:31:11.483551] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:48.946 [2024-12-05 13:31:11.483556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:48.946 [2024-12-05 13:31:11.483571] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:48.946 [2024-12-05 13:31:11.483761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.946 [2024-12-05 13:31:11.483773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d34c10 with addr=10.0.0.2, port=4420
00:26:48.946 [2024-12-05 13:31:11.483780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34c10 is same with the state(6) to be set
00:26:48.946 [2024-12-05 13:31:11.483791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d34c10 (9): Bad file descriptor
00:26:48.946 [2024-12-05 13:31:11.483802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:48.946 [2024-12-05 13:31:11.483808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:48.946 [2024-12-05 13:31:11.483815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:48.946 [2024-12-05 13:31:11.483822] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:48.946 [2024-12-05 13:31:11.483826] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:48.946 [2024-12-05 13:31:11.483831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:48.946 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:48.946 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:48.946 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:48.946 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:48.946 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:48.946 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:48.946 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:26:48.946 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:26:48.946 [2024-12-05 13:31:11.493602] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:48.946 [2024-12-05 13:31:11.493615] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:48.946 [2024-12-05 13:31:11.493620] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:48.946 [2024-12-05 13:31:11.493624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:48.946 [2024-12-05 13:31:11.493642] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:48.946 [2024-12-05 13:31:11.494101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.946 [2024-12-05 13:31:11.494140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d34c10 with addr=10.0.0.2, port=4420
00:26:48.946 [2024-12-05 13:31:11.494151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34c10 is same with the state(6) to be set
00:26:48.946 [2024-12-05 13:31:11.494170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d34c10 (9): Bad file descriptor
00:26:48.946 [2024-12-05 13:31:11.494208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:48.946 [2024-12-05 13:31:11.494218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:48.946 [2024-12-05 13:31:11.494226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:48.946 [2024-12-05 13:31:11.494233] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:48.946 [2024-12-05 13:31:11.494239] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:48.946 [2024-12-05 13:31:11.494243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:48.946 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:26:48.946 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:48.946 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:48.946 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:48.946 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:48.946 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:48.946 [2024-12-05 13:31:11.503675] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:48.946 [2024-12-05 13:31:11.503692] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:48.946 [2024-12-05 13:31:11.503697] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:26:48.946 [2024-12-05 13:31:11.503702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:26:48.946 [2024-12-05 13:31:11.503718] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:26:48.946 [2024-12-05 13:31:11.504092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:48.947 [2024-12-05 13:31:11.504130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d34c10 with addr=10.0.0.2, port=4420
00:26:48.947 [2024-12-05 13:31:11.504143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34c10 is same with the state(6) to be set
00:26:48.947 [2024-12-05 13:31:11.504163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d34c10 (9): Bad file descriptor
00:26:48.947 [2024-12-05 13:31:11.504188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:26:48.947 [2024-12-05 13:31:11.504196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:26:48.947 [2024-12-05 13:31:11.504204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:26:48.947 [2024-12-05 13:31:11.504212] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:26:48.947 [2024-12-05 13:31:11.504217] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:26:48.947 [2024-12-05 13:31:11.504226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:26:49.216 [2024-12-05 13:31:11.513752] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:26:49.216 [2024-12-05 13:31:11.513767] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:26:49.216 [2024-12-05 13:31:11.513772] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:49.216 [2024-12-05 13:31:11.513777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:49.216 [2024-12-05 13:31:11.513793] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:49.216 [2024-12-05 13:31:11.514096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.216 [2024-12-05 13:31:11.514111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d34c10 with addr=10.0.0.2, port=4420 00:26:49.216 [2024-12-05 13:31:11.514118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d34c10 is same with the state(6) to be set 00:26:49.216 [2024-12-05 13:31:11.514130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d34c10 (9): Bad file descriptor 00:26:49.216 [2024-12-05 13:31:11.514140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:49.216 [2024-12-05 13:31:11.514146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:49.216 [2024-12-05 13:31:11.514154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:49.216 [2024-12-05 13:31:11.514160] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:49.216 [2024-12-05 13:31:11.514165] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:49.216 [2024-12-05 13:31:11.514169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
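The repeated "connect() failed, errno = 111" entries above are ECONNREFUSED: the bdev_nvme reconnect poller keeps dialing the removed 10.0.0.2:4420 listener and fails each cycle until the discovery service re-points the path at 4421. A minimal sketch of probing a TCP listener the same way from bash, assuming bash's /dev/tcp redirection is available; probe_port and its arguments are illustrative names, not part of the test suite:

    # Retry a TCP connect until it succeeds, mirroring the poller's behavior.
    probe_port() {
        local addr=$1 port=$2 tries=${3:-10}
        while (( tries-- )); do
            # The subshell opens fd 3 on /dev/tcp/<addr>/<port>; a refused
            # connect (errno 111) makes the redirection fail.
            if (exec 3<>"/dev/tcp/$addr/$port") 2>/dev/null; then
                return 0    # something is listening
            fi
            sleep 0.5       # back off and retry, like bdev_nvme_reconnect_ctrlr_poll
        done
        return 1
    }
    # probe_port 10.0.0.2 4421 && echo "path to 4421 is reachable"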
00:26:49.216 [2024-12-05 13:31:11.519234] bdev_nvme.c:7271:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:49.216 [2024-12-05 13:31:11.519252] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:49.216 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:49.217 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:49.217 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:49.217 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:49.217 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:49.217 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:49.217 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:49.217 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.217 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.217 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:49.217 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.476 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:49.476 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:49.476 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:49.476 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:49.476 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:49.476 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.476 13:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.467 [2024-12-05 13:31:12.871104] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:50.467 [2024-12-05 13:31:12.871123] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:50.467 [2024-12-05 13:31:12.871136] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:50.467 [2024-12-05 13:31:12.958404] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:50.727 [2024-12-05 13:31:13.225763] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:50.727 [2024-12-05 13:31:13.226570] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1d4ae90:1 started. 00:26:50.727 [2024-12-05 13:31:13.228430] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:50.727 [2024-12-05 13:31:13.228458] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:50.727 [2024-12-05 13:31:13.230706] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1d4ae90 was disconnected and freed. delete nvme_qpair. 
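The bookkeeping above is driven by the waitforcondition helper from autotest_common.sh, whose internals are visible in the xtrace: it stores the condition string (@918), caps attempts at max=10 (@919), decrements the counter (@920), and evals the condition (@921), returning 0 as soon as it holds (@922). A sketch reconstructed from those traced lines; the per-iteration sleep is an assumption, since no delay appears in the trace:

    waitforcondition() {
        local cond=$1    # e.g. '[[ "$(get_bdev_list)" == "" ]]'
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # traced at autotest_common.sh@921-922
            sleep 1                    # assumed pause between polls
        done
        return 1                       # sketch: give up once attempts run out
    }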
00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.727 request: 00:26:50.727 { 00:26:50.727 "name": "nvme", 00:26:50.727 "trtype": "tcp", 00:26:50.727 "traddr": "10.0.0.2", 00:26:50.727 "adrfam": "ipv4", 00:26:50.727 "trsvcid": "8009", 00:26:50.727 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:50.727 "wait_for_attach": true, 00:26:50.727 "method": "bdev_nvme_start_discovery", 00:26:50.727 "req_id": 1 00:26:50.727 } 00:26:50.727 Got JSON-RPC error response 00:26:50.727 response: 00:26:50.727 { 00:26:50.727 "code": -17, 00:26:50.727 "message": "File exists" 00:26:50.727 } 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:50.727 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.988 13:31:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.988 request: 00:26:50.988 { 00:26:50.988 "name": "nvme_second", 00:26:50.988 "trtype": "tcp", 00:26:50.988 "traddr": "10.0.0.2", 00:26:50.988 "adrfam": "ipv4", 00:26:50.988 "trsvcid": "8009", 00:26:50.988 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:50.988 "wait_for_attach": true, 00:26:50.988 "method": "bdev_nvme_start_discovery", 00:26:50.988 "req_id": 1 00:26:50.988 } 00:26:50.988 Got JSON-RPC error response 00:26:50.988 response: 00:26:50.988 { 00:26:50.988 "code": -17, 00:26:50.988 "message": "File exists" 00:26:50.988 } 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:50.988 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:50.989 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:50.989 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:50.989 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:50.989 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.989 13:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.929 [2024-12-05 13:31:14.487887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.929 [2024-12-05 13:31:14.487918] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d4d7d0 with addr=10.0.0.2, port=8010 00:26:51.929 [2024-12-05 13:31:14.487931] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:51.929 [2024-12-05 13:31:14.487939] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:51.929 [2024-12-05 13:31:14.487946] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:52.970 [2024-12-05 13:31:15.490282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.970 [2024-12-05 13:31:15.490305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d4ae90 with addr=10.0.0.2, port=8010 00:26:52.970 [2024-12-05 13:31:15.490316] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:52.970 [2024-12-05 13:31:15.490322] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:52.971 [2024-12-05 13:31:15.490329] bdev_nvme.c:7552:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:54.352 [2024-12-05 13:31:16.492247] bdev_nvme.c:7527:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:54.352 request: 00:26:54.352 { 00:26:54.352 "name": "nvme_second", 00:26:54.352 "trtype": "tcp", 00:26:54.352 "traddr": "10.0.0.2", 00:26:54.352 "adrfam": "ipv4", 00:26:54.352 "trsvcid": "8010", 00:26:54.352 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:54.352 "wait_for_attach": false, 00:26:54.352 "attach_timeout_ms": 3000, 00:26:54.352 "method": "bdev_nvme_start_discovery", 00:26:54.352 "req_id": 1 00:26:54.352 } 00:26:54.352 Got JSON-RPC error response 00:26:54.352 response: 00:26:54.352 { 00:26:54.352 "code": -110, 00:26:54.352 "message": "Connection timed out" 00:26:54.352 } 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - 
SIGINT SIGTERM EXIT 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1063013 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:54.352 rmmod nvme_tcp 00:26:54.352 rmmod nvme_fabrics 00:26:54.352 rmmod nvme_keyring 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1062689 ']' 00:26:54.352 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1062689 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1062689 ']' 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1062689 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1062689 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1062689' 00:26:54.353 killing process with pid 1062689 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1062689 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1062689 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:54.353 13:31:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.899 13:31:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:56.899 00:26:56.899 real 0m20.882s 00:26:56.899 user 0m23.606s 00:26:56.899 sys 0m7.634s 00:26:56.899 13:31:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:56.899 13:31:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.899 ************************************ 00:26:56.899 END TEST nvmf_host_discovery 00:26:56.899 ************************************ 00:26:56.899 13:31:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:56.899 13:31:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:56.899 13:31:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:56.899 13:31:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.899 ************************************ 00:26:56.899 START TEST nvmf_host_multipath_status 00:26:56.899 ************************************ 00:26:56.899 13:31:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:56.899 * Looking for test storage... 
00:26:56.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:56.899 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:56.899 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:26:56.899 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:56.899 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:56.899 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:56.899 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:56.899 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:56.899 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:56.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.900 --rc genhtml_branch_coverage=1 00:26:56.900 --rc genhtml_function_coverage=1 00:26:56.900 --rc genhtml_legend=1 00:26:56.900 --rc geninfo_all_blocks=1 00:26:56.900 --rc geninfo_unexecuted_blocks=1 00:26:56.900 00:26:56.900 ' 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:56.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.900 --rc genhtml_branch_coverage=1 00:26:56.900 --rc genhtml_function_coverage=1 00:26:56.900 --rc genhtml_legend=1 00:26:56.900 --rc geninfo_all_blocks=1 00:26:56.900 --rc geninfo_unexecuted_blocks=1 00:26:56.900 00:26:56.900 ' 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:56.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.900 --rc genhtml_branch_coverage=1 00:26:56.900 --rc genhtml_function_coverage=1 00:26:56.900 --rc genhtml_legend=1 00:26:56.900 --rc geninfo_all_blocks=1 00:26:56.900 --rc geninfo_unexecuted_blocks=1 00:26:56.900 00:26:56.900 ' 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:56.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.900 --rc genhtml_branch_coverage=1 00:26:56.900 --rc genhtml_function_coverage=1 00:26:56.900 --rc genhtml_legend=1 00:26:56.900 --rc geninfo_all_blocks=1 00:26:56.900 --rc geninfo_unexecuted_blocks=1 00:26:56.900 00:26:56.900 ' 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
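The lt/cmp_versions calls above (scripts/common.sh@333-368) implement the lcov version gate seen in the trace: both version strings are split on ".", "-" and ":" and compared field by field, with the shorter array padded with zeros. A simplified sketch of that comparison, reconstructed from the traced lines; it drops the decimal() sanitization step shown at scripts/common.sh@353-355 and assumes purely numeric fields:

    cmp_versions() {    # usage: cmp_versions 1.15 '<' 2
        local -a ver1 ver2
        local op=$2 v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == *'>'* ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *'='* ]]    # all fields equal: true only for ==, <= or >=
    }
    # cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2.x"   # prints here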
00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:56.900 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:56.900 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:56.901 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.901 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:56.901 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:56.901 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:56.901 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.901 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.901 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.901 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:56.901 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:56.901 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:56.901 13:31:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:05.047 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.047 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:27:05.047 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:05.047 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:05.047 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:05.047 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:05.047 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:05.047 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:27:05.047 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:05.047 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:27:05.047 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:27:05.047 13:31:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:27:05.047 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:27:05.047 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:05.048 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:05.048 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:05.048 Found net devices under 0000:31:00.0: cvl_0_0 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: 
cvl_0_1' 00:27:05.048 Found net devices under 0000:31:00.1: cvl_0_1 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.048 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.049 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.049 13:31:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:05.049 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:05.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:27:05.049 00:27:05.049 --- 10.0.0.2 ping statistics --- 00:27:05.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.049 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:27:05.049 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:05.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:27:05.049 00:27:05.049 --- 10.0.0.1 ping statistics --- 00:27:05.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.049 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:27:05.049 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.049 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:27:05.049 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:05.049 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.049 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:05.049 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:05.049 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.049 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:05.049 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:05.308 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:05.308 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:05.308 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:05.308 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:05.308 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1069595 00:27:05.308 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1069595 00:27:05.308 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:05.308 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1069595 ']' 00:27:05.308 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.308 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:05.308 13:31:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:05.308 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:05.308 13:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:05.308 [2024-12-05 13:31:27.707408] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:27:05.308 [2024-12-05 13:31:27.707479] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.308 [2024-12-05 13:31:27.799142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:05.308 [2024-12-05 13:31:27.839917] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.308 [2024-12-05 13:31:27.839957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.308 [2024-12-05 13:31:27.839965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.309 [2024-12-05 13:31:27.839971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.309 [2024-12-05 13:31:27.839977] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.309 [2024-12-05 13:31:27.841190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.309 [2024-12-05 13:31:27.841193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.247 13:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.247 13:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:06.247 13:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:06.247 13:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:06.247 13:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:06.247 13:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:06.247 13:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1069595 00:27:06.247 13:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:06.247 [2024-12-05 13:31:28.692574] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:06.247 13:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:06.506 Malloc0 00:27:06.506 13:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:27:06.506 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:06.765 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:07.024 [2024-12-05 13:31:29.391223] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:07.025 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:07.025 [2024-12-05 13:31:29.555599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:07.025 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1070040 00:27:07.025 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:07.025 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:07.025 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1070040 /var/tmp/bdevperf.sock 00:27:07.025 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1070040 ']' 00:27:07.025 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:07.025 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:07.025 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:07.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
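Note: the target-side provisioning traced above (multipath_status.sh@36 through @42) condenses to the RPC sequence below. rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown in the log, and the trailing comments are glosses on the logged flags, not log output.

    rpc.py nvmf_create_transport -t tcp -o -u 8192                      # @36: TCP transport with the logged options
    rpc.py bdev_malloc_create 64 512 -b Malloc0                         # @37: 64 MiB RAM-backed bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -r -m 2                                # @39: -r enables ANA reporting on the subsystem
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # @40: expose Malloc0 as a namespace
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # @41
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # @42

Two listeners on the same address but different ports (4420 and 4421) are what give the host two distinct I/O paths to the one namespace that the rest of the test manipulates.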
00:27:07.025 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:07.025 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:07.284 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.284 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:07.284 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:07.544 13:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:07.802 Nvme0n1 00:27:07.802 13:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:08.061 Nvme0n1 00:27:08.061 13:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:08.061 13:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:10.638 13:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:10.638 13:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:10.638 13:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:10.638 13:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:11.580 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:11.580 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:11.580 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.580 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:11.840 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.840 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:11.840 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.840 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:11.840 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:11.840 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:11.840 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.840 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:12.100 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.100 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:12.100 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.100 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:12.359 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.359 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:12.359 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.359 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:12.618 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.619 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:12.619 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.619 13:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:12.619 13:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.619 13:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:12.619 13:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
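Note: every status probe above repeats the same @64 pattern, so it is worth spelling out once. A sketch of the helper as reconstructed from the trace (the real multipath_status.sh may differ in detail; rpc.py again stands for the full logged path):

    # port_status <trsvcid> <field> <expected>: ask bdevperf for its I/O-path view and compare one field
    port_status() {
        local port=$1 field=$2 expected=$3
        local got
        got=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
        [[ "$got" == "$expected" ]]
    }

The trace lines of the form [[ true == \t\r\u\e ]] are this final comparison; xtrace prints the right-hand side of == escaped because bash treats it as a pattern.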
00:27:12.878 13:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:13.137 13:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:14.078 13:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:14.078 13:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:14.078 13:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.078 13:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:14.339 13:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:14.339 13:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:14.339 13:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.339 13:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:14.599 13:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.599 13:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:14.599 13:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.599 13:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:14.599 13:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.599 13:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:14.599 13:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.599 13:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:14.859 13:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.859 13:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:14.859 13:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
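Note: the @59/@60 pairs that recur through this run come from one helper that retargets both listeners in a single call; a sketch consistent with the logged commands:

    # set_ANA_state <state for 4420> <state for 4421>, e.g. set_ANA_state non_optimized optimized
    set_ANA_state() {
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

The states cycled in this test are optimized, non_optimized, and inaccessible; each change is followed by sleep 1 so the initiator can observe the new ANA state before check_status runs.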
00:27:14.859 13:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:15.119 13:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.119 13:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:15.119 13:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.119 13:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:15.119 13:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.119 13:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:15.119 13:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:15.378 13:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:15.637 13:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:16.572 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:16.572 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:16.572 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.572 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:16.833 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.833 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:16.833 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.833 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:17.092 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:17.092 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:17.092 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.092 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:17.092 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.092 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:17.092 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:17.092 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.350 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.350 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:17.350 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:17.350 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.609 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.609 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:17.609 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.609 13:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:17.609 13:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.609 13:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:17.609 13:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:17.868 13:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:18.126 13:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:19.061 13:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:19.061 13:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:19.061 13:31:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.061 13:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:19.321 13:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.321 13:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:19.321 13:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.321 13:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:19.582 13:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:19.582 13:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:19.582 13:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.582 13:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:19.582 13:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.582 13:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:19.582 13:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.582 13:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:19.841 13:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.841 13:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:19.841 13:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:19.841 13:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:20.100 13:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.100 13:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:20.100 13:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.100 13:31:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:20.100 13:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:20.100 13:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:20.359 13:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:20.359 13:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:20.618 13:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:21.557 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:21.557 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:21.557 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.557 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:21.817 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:21.817 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:21.817 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.817 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:22.077 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:22.077 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:22.077 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.077 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:22.077 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:22.077 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:22.077 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.077 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:22.338 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:22.338 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:22.338 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.338 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:22.606 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:22.606 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:22.606 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.606 13:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:22.606 13:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:22.606 13:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:22.606 13:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:22.869 13:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:23.131 13:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:24.072 13:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:24.072 13:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:24.072 13:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:24.072 13:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:24.072 13:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:24.072 13:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:24.333 13:31:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:24.333 13:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:24.333 13:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:24.333 13:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:24.333 13:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:24.333 13:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:24.592 13:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:24.592 13:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:24.592 13:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:24.593 13:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:24.853 13:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:24.853 13:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:24.853 13:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:24.853 13:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:24.853 13:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:24.853 13:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:24.853 13:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:24.853 13:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:25.121 13:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:25.121 13:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:25.392 13:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:27:25.392 13:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:25.392 13:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:25.672 13:31:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:26.624 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:26.624 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:26.624 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.624 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:26.884 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.884 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:26.884 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.884 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:27.145 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.145 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:27.145 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.145 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:27.145 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.145 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:27.145 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.145 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:27.405 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.405 13:31:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:27.405 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.405 13:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:27.665 13:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.665 13:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:27.665 13:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:27.665 13:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:27.925 13:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:27.925 13:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:27.925 13:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:27.925 13:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:28.184 13:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:29.123 13:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:29.123 13:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:29.123 13:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.123 13:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:29.383 13:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:29.383 13:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:29.383 13:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.383 13:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:29.644 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.644 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:29.645 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.645 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:29.645 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.645 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:29.645 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.645 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:29.908 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.908 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:29.908 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.908 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:30.169 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.169 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:30.169 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:30.169 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:30.430 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:30.430 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:30.430 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:30.430 13:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:30.689 13:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
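Note: from @116 onward the controller runs with multipath policy active_active, and the @121 round above is the first in which both paths report current == true at once; under the earlier active_passive behavior (the default) at most one path was current at a time. The six check_status arguments map onto the @68 through @73 probes in order; reconstructed from the trace:

    # check_status <4420 current> <4421 current> <4420 connected> <4421 connected> <4420 accessible> <4421 accessible>
    check_status() {
        port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
        port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
        port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
    }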
00:27:31.623 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:31.623 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:31.623 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.623 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:31.885 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.885 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:31.885 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.885 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:32.144 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.144 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:32.144 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.144 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:32.144 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.144 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:32.145 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.145 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:32.405 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.405 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:32.405 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.405 13:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:32.665 13:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.665 13:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
00:27:32.665 13:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:32.665 13:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:32.665 13:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:32.665 13:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:32.665 13:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:27:32.665 13:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:27:32.925 13:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:27:33.184 13:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:27:34.123 13:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:27:34.123 13:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:34.123 13:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:34.123 13:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:34.384 13:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:34.384 13:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:27:34.384 13:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:34.384 13:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:34.644 13:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:34.644 13:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:34.644 13:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:34.644 13:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
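set_ANA_state (multipath_status.sh@59-@60) is the target-side knob: one nvmf_subsystem_listener_set_ana_state RPC per listener. A reconstructed sketch; $rpc is shorthand introduced here for the full scripts/rpc.py path seen in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    set_ANA_state() {
        # $1 = ANA state for the 4420 listener, $2 = for 4421
        # (e.g. non_optimized or inaccessible, as in this trace)
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

The one-second sleep after each state change gives the host time to pick up the new ANA state before check_status samples the paths. Hence check_status true false true true true false above: after non_optimized/inaccessible the 4421 path stays connected at the transport level but is no longer current or accessible.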
00:27:34.644 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:34.644 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:34.644 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:34.644 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:34.905 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:34.905 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:34.905 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:34.905 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:35.166 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:35.166 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:27:35.166 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:35.166 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:35.166 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:35.166 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1070040
00:27:35.166 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1070040 ']'
00:27:35.166 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1070040
00:27:35.166 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:27:35.166 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:35.433 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1070040
00:27:35.433 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:27:35.433 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:27:35.433 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1070040'
killing process with pid 1070040
00:27:35.433 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1070040
00:27:35.433 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1070040
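killprocess (common/autotest_common.sh@954-@978) is the stock autotest teardown helper; the traced commands map onto a body like the following sketch, inferred from the trace (the real helper has additional branches this run does not exercise):

    killprocess() {
        [ -z "$1" ] && return 1               # @954: refuse an empty pid
        kill -0 "$1"                          # @958: error out if already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$1")   # @960, here: reactor_2
        fi
        [ "$process_name" = sudo ] && return 1   # @964: never kill a bare sudo
        echo "killing process with pid $1"       # @972
        kill "$1"                                # @973: default SIGTERM
        wait "$1"                                # @978: reap and collect exit status
    }

comm resolves to reactor_2 because bdevperf was started with core mask 0x4, so its single reactor thread runs on core 2 (see the reactor_run notice below). On SIGTERM, bdevperf stops the verify job and emits the JSON result object that follows.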
00:27:35.433 "core_mask": "0x4", 00:27:35.433 "workload": "verify", 00:27:35.433 "status": "terminated", 00:27:35.433 "verify_range": { 00:27:35.433 "start": 0, 00:27:35.433 "length": 16384 00:27:35.433 }, 00:27:35.433 "queue_depth": 128, 00:27:35.433 "io_size": 4096, 00:27:35.433 "runtime": 27.038163, 00:27:35.433 "iops": 10808.16769985446, 00:27:35.433 "mibps": 42.21940507755649, 00:27:35.433 "io_failed": 0, 00:27:35.433 "io_timeout": 0, 00:27:35.433 "avg_latency_us": 11826.122997106191, 00:27:35.433 "min_latency_us": 279.8933333333333, 00:27:35.433 "max_latency_us": 3019898.88 00:27:35.433 } 00:27:35.433 ], 00:27:35.433 "core_count": 1 00:27:35.433 } 00:27:35.433 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1070040 00:27:35.433 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:35.433 [2024-12-05 13:31:29.619847] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:27:35.433 [2024-12-05 13:31:29.619914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1070040 ] 00:27:35.433 [2024-12-05 13:31:29.684755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.433 [2024-12-05 13:31:29.713803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:35.433 Running I/O for 90 seconds... 00:27:35.433 9557.00 IOPS, 37.33 MiB/s [2024-12-05T12:31:58.001Z] 9570.50 IOPS, 37.38 MiB/s [2024-12-05T12:31:58.001Z] 9607.67 IOPS, 37.53 MiB/s [2024-12-05T12:31:58.001Z] 9620.50 IOPS, 37.58 MiB/s [2024-12-05T12:31:58.001Z] 9844.20 IOPS, 38.45 MiB/s [2024-12-05T12:31:58.001Z] 10347.67 IOPS, 40.42 MiB/s [2024-12-05T12:31:58.001Z] 10710.29 IOPS, 41.84 MiB/s [2024-12-05T12:31:58.001Z] 10739.38 IOPS, 41.95 MiB/s [2024-12-05T12:31:58.001Z] 10624.00 IOPS, 41.50 MiB/s [2024-12-05T12:31:58.001Z] 10527.50 IOPS, 41.12 MiB/s [2024-12-05T12:31:58.001Z] 10454.45 IOPS, 40.84 MiB/s [2024-12-05T12:31:58.001Z] 10386.75 IOPS, 40.57 MiB/s [2024-12-05T12:31:58.001Z] [2024-12-05 13:31:42.825429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.433 [2024-12-05 13:31:42.825459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.433 [2024-12-05 13:31:42.825490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:86904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.433 [2024-12-05 13:31:42.825497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:35.433 [2024-12-05 13:31:42.825508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.433 [2024-12-05 13:31:42.825513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:35.433 [2024-12-05 13:31:42.825524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:35.433 
00:27:35.433 [2024-12-05 13:31:42.825529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
[.. repeated nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs elided: the remaining WRITE I/Os (lba 86928-87800) and READ I/Os (lba 86848-86896) of this burst, every one completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ..]
00:27:35.436 [2024-12-05 13:31:42.829065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:35.436 [2024-12-05 13:31:42.829071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007b p:0 m:0 dnr:0
[.. six further WRITE command/completion *NOTICE* pairs elided (lba 87816-87856, sqhd 007c-0001), all ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ..]
00:27:35.437 9657.15 IOPS, 37.72 MiB/s [2024-12-05T12:31:58.005Z]
8967.36 IOPS, 35.03 MiB/s [2024-12-05T12:31:58.005Z]
8369.53 IOPS, 32.69 MiB/s [2024-12-05T12:31:58.005Z]
8577.06 IOPS, 33.50 MiB/s [2024-12-05T12:31:58.005Z]
8828.71 IOPS, 34.49 MiB/s [2024-12-05T12:31:58.005Z]
9233.28 IOPS, 36.07 MiB/s [2024-12-05T12:31:58.005Z]
9638.53 IOPS, 37.65 MiB/s [2024-12-05T12:31:58.005Z]
9952.50 IOPS, 38.88 MiB/s [2024-12-05T12:31:58.005Z]
10091.33 IOPS, 39.42 MiB/s [2024-12-05T12:31:58.005Z]
10219.05 IOPS, 39.92 MiB/s [2024-12-05T12:31:58.005Z]
10440.35 IOPS, 40.78 MiB/s [2024-12-05T12:31:58.005Z]
10710.21 IOPS, 41.84 MiB/s [2024-12-05T12:31:58.005Z]
[2024-12-05 13:31:55.558746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:35.437 [2024-12-05 13:31:55.558783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:27:35.437 [2024-12-05 13:31:55.558813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-12-05 13:31:55.558819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
[.. paired command/completion *NOTICE* entries elided: WRITE lba 67112-67144 and READ lba 66296-67056 (qid:1), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ..]
[2024-12-05 13:31:55.559389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.437 [2024-12-05 13:31:55.559394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:35.437 [2024-12-05 13:31:55.559404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.437 [2024-12-05 13:31:55.559409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:35.437 [2024-12-05 13:31:55.559420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.437 [2024-12-05 13:31:55.559425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:27:35.437 [2024-12-05 13:31:55.560068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:35.437 [2024-12-05 13:31:55.560078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:27:35.437 10898.24 IOPS, 42.57 MiB/s
[2024-12-05T12:31:58.005Z] 10851.42 IOPS, 42.39 MiB/s
[2024-12-05T12:31:58.005Z] 10809.22 IOPS, 42.22 MiB/s
[2024-12-05T12:31:58.005Z] Received shutdown signal, test time was about 27.038774 seconds
00:27:35.437
00:27:35.437                                                                                                 Latency(us)
00:27:35.437 [2024-12-05T12:31:58.005Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:35.437 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:35.437 Verification LBA range: start 0x0 length 0x4000
00:27:35.437 Nvme0n1             :      27.04   10808.17      42.22       0.00     0.00   11826.12     279.89 3019898.88
00:27:35.437 [2024-12-05T12:31:58.005Z] ===================================================================================================================
00:27:35.437 [2024-12-05T12:31:58.005Z] Total               :            10808.17      42.22       0.00     0.00   11826.12     279.89 3019898.88
00:27:35.437 13:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:35.699 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
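Condensed for readability, the nvmftestfini teardown that begins here and continues below amounts to the following sequence (a sketch assembled from the xtrace lines; pid 1069595 and the cnode1 NQN are specific to this run):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
  sync
  modprobe -v -r nvme-tcp       # also pulls nvme_fabrics and nvme_keyring, per the rmmod output above
  modprobe -v -r nvme-fabrics
  kill 1069595                  # killprocess: the nvmf target started for this test
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # iptr: strip the test's ACCEPT rules
  ip -4 addr flush cvl_0_1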
13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1069595 ']' 00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1069595 00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1069595 ']' 00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1069595 00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1069595 00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1069595' 00:27:35.699 killing process with pid 1069595 00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1069595 00:27:35.699 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1069595 00:27:35.962 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:35.962 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:35.962 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:35.962 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:27:35.962 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:27:35.962 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:35.962 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:27:35.962 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:35.962 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:35.962 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.962 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.962 13:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:37.875 13:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:37.875 00:27:37.875 real 0m41.473s 00:27:37.875 user 1m44.646s 00:27:37.875 sys 0m12.401s 00:27:37.875 13:32:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:37.875 13:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:37.875 ************************************
00:27:37.875 END TEST nvmf_host_multipath_status
00:27:37.875 ************************************
00:27:38.135 13:32:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:27:38.135 13:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:38.135 13:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:38.135 13:32:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:38.135 ************************************
00:27:38.135 START TEST nvmf_discovery_remove_ifc
00:27:38.135 ************************************
00:27:38.135 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:27:38.135 * Looking for test storage...
00:27:38.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:38.135 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:27:38.135 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version
00:27:38.135 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:38.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.396 --rc genhtml_branch_coverage=1 00:27:38.396 --rc genhtml_function_coverage=1 00:27:38.396 --rc genhtml_legend=1 00:27:38.396 --rc geninfo_all_blocks=1 00:27:38.396 --rc geninfo_unexecuted_blocks=1 00:27:38.396 00:27:38.396 ' 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:38.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.396 --rc genhtml_branch_coverage=1 00:27:38.396 --rc genhtml_function_coverage=1 00:27:38.396 --rc genhtml_legend=1 00:27:38.396 --rc geninfo_all_blocks=1 00:27:38.396 --rc geninfo_unexecuted_blocks=1 00:27:38.396 00:27:38.396 ' 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:38.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.396 --rc genhtml_branch_coverage=1 00:27:38.396 --rc genhtml_function_coverage=1 00:27:38.396 --rc genhtml_legend=1 00:27:38.396 --rc geninfo_all_blocks=1 00:27:38.396 --rc geninfo_unexecuted_blocks=1 00:27:38.396 00:27:38.396 ' 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:38.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.396 --rc genhtml_branch_coverage=1 00:27:38.396 --rc genhtml_function_coverage=1 00:27:38.396 --rc genhtml_legend=1 00:27:38.396 --rc geninfo_all_blocks=1 00:27:38.396 --rc geninfo_unexecuted_blocks=1 00:27:38.396 00:27:38.396 ' 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.396 
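The lcov version probe traced above walks field by field through scripts/common.sh's comparator; a condensed reconstruction of that helper (the actual script, which also validates each field via decimal, is the authoritative source -- this sketch only mirrors the logic visible in the trace):

  cmp_versions() {                       # usage: cmp_versions 1.15 '<' 2
      local -a ver1 ver2
      local op=$2 v a b ver1_l ver2_l
      IFS=.-: read -ra ver1 <<< "$1"     # "1.15" -> (1 15)
      IFS=.-: read -ra ver2 <<< "$3"     # "2"    -> (2)
      ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
      for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
          a=${ver1[v]:-0} b=${ver2[v]:-0}    # missing fields compare as 0
          ((a > b)) && { [[ $op == '>' ]]; return; }
          ((a < b)) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' ]]                  # every field equal
  }
  lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 succeeds: lcov 1.15 predates 2.x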
13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.396 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:38.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:38.397 13:32:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:46.529 13:32:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:46.529 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:46.529 13:32:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:46.529 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:46.529 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:46.530 Found net devices under 0000:31:00.0: cvl_0_0 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:46.530 Found net devices under 0000:31:00.1: cvl_0_1 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:46.530 13:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:46.791 
13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:46.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:46.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.627 ms
00:27:46.791
00:27:46.791 --- 10.0.0.2 ping statistics ---
00:27:46.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:46.791 rtt min/avg/max/mdev = 0.627/0.627/0.627/0.000 ms
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:46.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:46.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms
00:27:46.791
00:27:46.791 --- 10.0.0.1 ping statistics ---
00:27:46.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:46.791 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1080490
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1080490
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1080490 ']'
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:46.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:46.791 13:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:46.791 [2024-12-05 13:32:09.241147] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:27:46.791 [2024-12-05 13:32:09.241216] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.791 [2024-12-05 13:32:09.349303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.052 [2024-12-05 13:32:09.399653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:47.052 [2024-12-05 13:32:09.399708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:47.052 [2024-12-05 13:32:09.399717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:47.052 [2024-12-05 13:32:09.399724] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:47.052 [2024-12-05 13:32:09.399730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:47.052 [2024-12-05 13:32:09.400526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.624 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:47.624 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:47.624 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:47.624 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:47.625 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.625 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:47.625 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:47.625 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.625 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.625 [2024-12-05 13:32:10.115490] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:47.625 [2024-12-05 13:32:10.123778] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:47.625 null0 00:27:47.625 [2024-12-05 13:32:10.155719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:47.625 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.625 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1080547 00:27:47.625 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1080547 /tmp/host.sock 00:27:47.625 13:32:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:47.625 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1080547 ']' 00:27:47.625 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:47.625 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:47.625 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:47.625 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:47.625 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:47.625 13:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.885 [2024-12-05 13:32:10.231384] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:27:47.885 [2024-12-05 13:32:10.231445] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1080547 ] 00:27:47.885 [2024-12-05 13:32:10.313200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.885 [2024-12-05 13:32:10.355158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.828 13:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:48.828 13:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:48.828 13:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:48.828 13:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:48.828 13:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.828 13:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:48.828 13:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.828 13:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:48.828 13:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.828 13:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:48.828 13:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.828 13:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:48.828 13:32:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.828 13:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:49.769 [2024-12-05 13:32:12.186070] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:49.769 [2024-12-05 13:32:12.186095] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:49.769 [2024-12-05 13:32:12.186110] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:49.769 [2024-12-05 13:32:12.272365] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:50.030 [2024-12-05 13:32:12.374287] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:50.030 [2024-12-05 13:32:12.375219] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24024d0:1 started. 00:27:50.030 [2024-12-05 13:32:12.376810] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:50.030 [2024-12-05 13:32:12.376852] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:50.030 [2024-12-05 13:32:12.376880] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:50.030 [2024-12-05 13:32:12.376894] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:50.030 [2024-12-05 13:32:12.376915] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:50.030 [2024-12-05 13:32:12.383711] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24024d0 was disconnected and freed. delete nvme_qpair. 
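The host-side bring-up that produced the discovery attach above, consolidated from the trace (rpc.py here stands in for the test's rpc_cmd wrapper; paths are this workspace's, and the flags are exactly the ones traced):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

Discovery at port 8009 reports the NVM subsystem at 4420; attaching it creates controller nvme0 and bdev nvme0n1, which is what the wait loop below checks for.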
00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.030 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:50.291 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.291 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:50.291 13:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:51.231 13:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:51.231 13:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:51.231 13:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:51.231 13:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.231 13:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:51.231 13:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:51.231 13:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:51.231 13:32:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.231 13:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:51.231 13:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:52.174 13:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:52.174 13:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:52.174 13:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:52.174 13:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.174 13:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:52.174 13:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:52.174 13:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:52.174 13:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.174 13:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:52.174 13:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:53.555 13:32:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:53.555 13:32:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:53.555 13:32:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:53.555 13:32:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:53.555 13:32:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.555 13:32:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:53.555 13:32:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:53.555 13:32:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.555 13:32:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:53.555 13:32:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:54.498 13:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:54.498 13:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:54.498 13:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:54.498 13:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.498 13:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:54.498 13:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:54.498 13:32:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:54.498 13:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.498 13:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:54.498 13:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:55.441 [2024-12-05 13:32:17.817538] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:55.441 [2024-12-05 13:32:17.817585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.441 [2024-12-05 13:32:17.817597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-12-05 13:32:17.817608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.441 [2024-12-05 13:32:17.817616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-12-05 13:32:17.817624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.441 [2024-12-05 13:32:17.817632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-12-05 13:32:17.817640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.441 [2024-12-05 13:32:17.817647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-12-05 13:32:17.817655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.441 [2024-12-05 13:32:17.817663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.441 [2024-12-05 13:32:17.817670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23deea0 is same with the state(6) to be set 00:27:55.441 [2024-12-05 13:32:17.827557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23deea0 (9): Bad file descriptor 00:27:55.441 [2024-12-05 13:32:17.837592] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:55.441 [2024-12-05 13:32:17.837605] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:55.441 [2024-12-05 13:32:17.837610] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:55.441 [2024-12-05 13:32:17.837615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:55.441 [2024-12-05 13:32:17.837637] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
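The get_bdev_list/wait_for_bdev loop the trace keeps re-entering, above and below, is in sketch form a one-second poll over the host app's bdev names (a reconstruction of the discovery_remove_ifc.sh helpers as traced):

  get_bdev_list() {    # space-separated, sorted names of all bdevs on the host app
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {    # spin until the list matches: "nvme0n1" after attach, '' after removal
      while [[ $(get_bdev_list) != "$1" ]]; do
          sleep 1
      done
  }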
00:27:55.441 13:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:55.441 13:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:55.441 13:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:55.441 13:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:55.441 13:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:55.441 13:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.441 13:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:56.405 [2024-12-05 13:32:18.843896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:56.405 [2024-12-05 13:32:18.843935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23deea0 with addr=10.0.0.2, port=4420 00:27:56.405 [2024-12-05 13:32:18.843947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23deea0 is same with the state(6) to be set 00:27:56.405 [2024-12-05 13:32:18.843973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23deea0 (9): Bad file descriptor 00:27:56.405 [2024-12-05 13:32:18.844340] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:56.405 [2024-12-05 13:32:18.844364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:56.405 [2024-12-05 13:32:18.844372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:56.405 [2024-12-05 13:32:18.844381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:56.405 [2024-12-05 13:32:18.844388] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:56.405 [2024-12-05 13:32:18.844393] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:56.405 [2024-12-05 13:32:18.844398] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:56.405 [2024-12-05 13:32:18.844406] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:56.405 [2024-12-05 13:32:18.844411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:56.405 13:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.405 13:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:56.405 13:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:57.348 [2024-12-05 13:32:19.846782] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:57.348 [2024-12-05 13:32:19.846801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
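[Editor's note] The errno 110 (`Connection timed out`) and `Bad file descriptor` failures in this stretch are the intended outcome of the test's fault injection: before this excerpt, the target-side interface is stripped inside its network namespace, so every reconnect to 10.0.0.2:4420 times out. The removal itself is not visible here; a hypothetical reconstruction, taken as the inverse of the @82/@83 restore commands traced further down:

```bash
# Hypothetical fault-injection step (assumed; the actual removal happens
# earlier in the log). Inverse of the restore traced below at
# discovery_remove_ifc.sh@82-83 ("ip addr add" / "ip link set ... up").
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
```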
00:27:57.348 [2024-12-05 13:32:19.846812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:57.348 [2024-12-05 13:32:19.846820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:57.348 [2024-12-05 13:32:19.846827] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:57.348 [2024-12-05 13:32:19.846835] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:57.348 [2024-12-05 13:32:19.846840] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:57.348 [2024-12-05 13:32:19.846844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:57.348 [2024-12-05 13:32:19.846869] bdev_nvme.c:7235:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:57.348 [2024-12-05 13:32:19.846890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.348 [2024-12-05 13:32:19.846900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.348 [2024-12-05 13:32:19.846910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.348 [2024-12-05 13:32:19.846918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.348 [2024-12-05 13:32:19.846926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.348 [2024-12-05 13:32:19.846934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.348 [2024-12-05 13:32:19.846942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.348 [2024-12-05 13:32:19.846953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.348 [2024-12-05 13:32:19.846961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:57.348 [2024-12-05 13:32:19.846969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:57.348 [2024-12-05 13:32:19.846977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:27:57.348 [2024-12-05 13:32:19.847214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ce1e0 (9): Bad file descriptor 00:27:57.348 [2024-12-05 13:32:19.848227] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:57.348 [2024-12-05 13:32:19.848238] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:57.348 13:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:57.348 13:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:57.348 13:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:57.348 13:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:57.348 13:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.348 13:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:57.348 13:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:57.348 13:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.610 13:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:57.610 13:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:57.610 13:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:57.610 13:32:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:57.610 13:32:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:57.610 13:32:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:57.610 13:32:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:57.610 13:32:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.610 13:32:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:57.610 13:32:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:57.610 13:32:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:57.610 13:32:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.610 13:32:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:57.610 13:32:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:58.552 13:32:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:58.552 13:32:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:58.552 13:32:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:58.552 13:32:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.552 13:32:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:58.552 13:32:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:58.552 13:32:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:58.552 13:32:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.812 13:32:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:58.812 13:32:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:59.583 [2024-12-05 13:32:21.899051] bdev_nvme.c:7484:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:59.584 [2024-12-05 13:32:21.899071] bdev_nvme.c:7570:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:59.584 [2024-12-05 13:32:21.899084] bdev_nvme.c:7447:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:59.584 [2024-12-05 13:32:21.986345] bdev_nvme.c:7413:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:59.915 13:32:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:59.916 13:32:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:59.916 13:32:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:59.916 13:32:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.916 13:32:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:59.916 13:32:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:59.916 13:32:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:59.916 13:32:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.916 13:32:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:59.916 13:32:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:59.916 [2024-12-05 13:32:22.208564] bdev_nvme.c:5636:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:59.916 [2024-12-05 13:32:22.209508] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x23b8120:1 started. 
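[Editor's note] The `Discovery[10.0.0.2:8009]` lines show the host's discovery poller performing the recovery on its own: once the interface is restored, the discovery controller reconnects, fetches the discovery log page, finds `nqn.2016-06.io.spdk:cnode0` advertised again, and attaches it as `nvme1`. The RPC that arms this poller is issued earlier in the test; a hypothetical invocation, with the transport, address, and service values assumed from the trace:

```bash
# Hypothetical: start the host-side discovery service so that subsystems
# advertised by the discovery controller at 10.0.0.2:8009 are attached
# (and re-attached after faults) automatically under the "nvme" prefix.
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -w
```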
00:27:59.916 [2024-12-05 13:32:22.210736] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:59.916 [2024-12-05 13:32:22.210772] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:59.916 [2024-12-05 13:32:22.210791] bdev_nvme.c:8280:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:59.916 [2024-12-05 13:32:22.210804] bdev_nvme.c:7303:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:59.916 [2024-12-05 13:32:22.210812] bdev_nvme.c:7262:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:59.916 [2024-12-05 13:32:22.217772] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x23b8120 was disconnected and freed. delete nvme_qpair. 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1080547 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1080547 ']' 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1080547 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1080547 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1080547' 00:28:00.864 killing process with pid 1080547 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1080547 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1080547 00:28:00.864 13:32:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:00.864 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:00.864 rmmod nvme_tcp 00:28:01.126 rmmod nvme_fabrics 00:28:01.126 rmmod nvme_keyring 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1080490 ']' 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1080490 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1080490 ']' 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1080490 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1080490 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1080490' 00:28:01.126 killing process with pid 1080490 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1080490 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1080490 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:01.126 13:32:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:03.670 00:28:03.670 real 0m25.213s 00:28:03.670 user 0m29.469s 00:28:03.670 sys 0m7.825s 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:03.670 ************************************ 00:28:03.670 END TEST nvmf_discovery_remove_ifc 00:28:03.670 ************************************ 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.670 ************************************ 00:28:03.670 START TEST nvmf_identify_kernel_target 00:28:03.670 ************************************ 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:03.670 * Looking for test storage... 
00:28:03.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:03.670 13:32:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:03.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.670 --rc genhtml_branch_coverage=1 00:28:03.670 --rc genhtml_function_coverage=1 00:28:03.670 --rc genhtml_legend=1 00:28:03.670 --rc geninfo_all_blocks=1 00:28:03.670 --rc geninfo_unexecuted_blocks=1 00:28:03.670 00:28:03.670 ' 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:03.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.670 --rc genhtml_branch_coverage=1 00:28:03.670 --rc genhtml_function_coverage=1 00:28:03.670 --rc genhtml_legend=1 00:28:03.670 --rc geninfo_all_blocks=1 00:28:03.670 --rc geninfo_unexecuted_blocks=1 00:28:03.670 00:28:03.670 ' 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:03.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.670 --rc genhtml_branch_coverage=1 00:28:03.670 --rc genhtml_function_coverage=1 00:28:03.670 --rc genhtml_legend=1 00:28:03.670 --rc geninfo_all_blocks=1 00:28:03.670 --rc geninfo_unexecuted_blocks=1 00:28:03.670 00:28:03.670 ' 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:03.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.670 --rc genhtml_branch_coverage=1 00:28:03.670 --rc genhtml_function_coverage=1 00:28:03.670 --rc genhtml_legend=1 00:28:03.670 --rc geninfo_all_blocks=1 00:28:03.670 --rc geninfo_unexecuted_blocks=1 00:28:03.670 00:28:03.670 ' 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.670 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:28:03.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:28:03.671 13:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:28:11.816 13:32:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:11.816 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:11.817 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:11.817 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:11.817 Found net devices under 0000:31:00.0: cvl_0_0 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:11.817 Found net devices under 0000:31:00.1: cvl_0_1 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:11.817 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:12.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:28:12.080 00:28:12.080 --- 10.0.0.2 ping statistics --- 00:28:12.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.080 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:12.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:28:12.080 00:28:12.080 --- 10.0.0.1 ping statistics --- 00:28:12.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.080 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.080 13:32:34 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:12.080 13:32:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:16.291 Waiting for block devices as requested 00:28:16.291 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:16.291 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:16.291 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:16.291 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:16.291 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:16.291 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:16.553 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:16.553 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:16.553 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:16.814 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:16.814 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:16.814 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:17.075 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:17.075 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:17.075 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:17.075 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:17.335 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
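[Editor's note] The `configure_kernel_target` trace that continues below shows only the `mkdir`/`echo` payloads, not the configfs files they are redirected into. Assuming the stock Linux `nvmet` configfs layout (the redirection targets are inferred, not visible in the xtrace), the sequence amounts to roughly this:

```bash
# Sketch of the kernel NVMe-oF target setup traced below; attribute file
# names are assumed from the standard nvmet configfs layout.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

mkdir "$subsys"              # subsystem (@686)
mkdir "$subsys/namespaces/1" # one namespace (@687)
mkdir "$nvmet/ports/1"       # one port (@688)

echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"

# Back the namespace with the spare local NVMe device and enable it.
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"

# TCP listener on 10.0.0.1:4420.
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"

# Expose the subsystem through the port.
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
```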
00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:17.597 No valid GPT data, bailing 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:17.597 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:28:17.861 00:28:17.861 Discovery Log Number of Records 2, Generation counter 2 00:28:17.861 =====Discovery Log Entry 0====== 00:28:17.861 trtype: tcp 00:28:17.861 adrfam: ipv4 00:28:17.861 subtype: current discovery subsystem 00:28:17.861 treq: not specified, sq flow control disable supported 00:28:17.861 portid: 1 00:28:17.861 trsvcid: 4420 00:28:17.861 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:17.861 traddr: 10.0.0.1 00:28:17.861 eflags: none 00:28:17.861 sectype: none 00:28:17.861 =====Discovery Log Entry 1====== 00:28:17.861 trtype: tcp 00:28:17.861 adrfam: ipv4 00:28:17.861 subtype: nvme subsystem 00:28:17.861 treq: not specified, sq flow control disable 
supported 00:28:17.861 portid: 1 00:28:17.861 trsvcid: 4420 00:28:17.861 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:17.861 traddr: 10.0.0.1 00:28:17.861 eflags: none 00:28:17.861 sectype: none 00:28:17.861 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:17.861 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:17.861 ===================================================== 00:28:17.861 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:17.861 ===================================================== 00:28:17.861 Controller Capabilities/Features 00:28:17.861 ================================ 00:28:17.861 Vendor ID: 0000 00:28:17.861 Subsystem Vendor ID: 0000 00:28:17.861 Serial Number: 397c9696921a848b87e7 00:28:17.861 Model Number: Linux 00:28:17.861 Firmware Version: 6.8.9-20 00:28:17.861 Recommended Arb Burst: 0 00:28:17.861 IEEE OUI Identifier: 00 00 00 00:28:17.861 Multi-path I/O 00:28:17.861 May have multiple subsystem ports: No 00:28:17.861 May have multiple controllers: No 00:28:17.861 Associated with SR-IOV VF: No 00:28:17.861 Max Data Transfer Size: Unlimited 00:28:17.861 Max Number of Namespaces: 0 00:28:17.861 Max Number of I/O Queues: 1024 00:28:17.861 NVMe Specification Version (VS): 1.3 00:28:17.861 NVMe Specification Version (Identify): 1.3 00:28:17.861 Maximum Queue Entries: 1024 00:28:17.861 Contiguous Queues Required: No 00:28:17.861 Arbitration Mechanisms Supported 00:28:17.861 Weighted Round Robin: Not Supported 00:28:17.861 Vendor Specific: Not Supported 00:28:17.861 Reset Timeout: 7500 ms 00:28:17.861 Doorbell Stride: 4 bytes 00:28:17.861 NVM Subsystem Reset: Not Supported 00:28:17.861 Command Sets Supported 00:28:17.861 NVM Command Set: Supported 00:28:17.861 Boot Partition: Not Supported 00:28:17.861 Memory Page Size Minimum: 4096 bytes 00:28:17.861 Memory Page Size Maximum: 4096 bytes 00:28:17.861 Persistent Memory Region: Not Supported 00:28:17.861 Optional Asynchronous Events Supported 00:28:17.861 Namespace Attribute Notices: Not Supported 00:28:17.861 Firmware Activation Notices: Not Supported 00:28:17.861 ANA Change Notices: Not Supported 00:28:17.861 PLE Aggregate Log Change Notices: Not Supported 00:28:17.861 LBA Status Info Alert Notices: Not Supported 00:28:17.861 EGE Aggregate Log Change Notices: Not Supported 00:28:17.861 Normal NVM Subsystem Shutdown event: Not Supported 00:28:17.861 Zone Descriptor Change Notices: Not Supported 00:28:17.861 Discovery Log Change Notices: Supported 00:28:17.861 Controller Attributes 00:28:17.861 128-bit Host Identifier: Not Supported 00:28:17.861 Non-Operational Permissive Mode: Not Supported 00:28:17.861 NVM Sets: Not Supported 00:28:17.861 Read Recovery Levels: Not Supported 00:28:17.861 Endurance Groups: Not Supported 00:28:17.861 Predictable Latency Mode: Not Supported 00:28:17.861 Traffic Based Keep ALive: Not Supported 00:28:17.861 Namespace Granularity: Not Supported 00:28:17.861 SQ Associations: Not Supported 00:28:17.861 UUID List: Not Supported 00:28:17.861 Multi-Domain Subsystem: Not Supported 00:28:17.861 Fixed Capacity Management: Not Supported 00:28:17.861 Variable Capacity Management: Not Supported 00:28:17.861 Delete Endurance Group: Not Supported 00:28:17.861 Delete NVM Set: Not Supported 00:28:17.861 Extended LBA Formats Supported: Not Supported 00:28:17.861 Flexible Data Placement 
Supported: Not Supported 00:28:17.861 00:28:17.861 Controller Memory Buffer Support 00:28:17.861 ================================ 00:28:17.861 Supported: No 00:28:17.861 00:28:17.861 Persistent Memory Region Support 00:28:17.861 ================================ 00:28:17.861 Supported: No 00:28:17.861 00:28:17.861 Admin Command Set Attributes 00:28:17.861 ============================ 00:28:17.861 Security Send/Receive: Not Supported 00:28:17.861 Format NVM: Not Supported 00:28:17.861 Firmware Activate/Download: Not Supported 00:28:17.861 Namespace Management: Not Supported 00:28:17.861 Device Self-Test: Not Supported 00:28:17.861 Directives: Not Supported 00:28:17.861 NVMe-MI: Not Supported 00:28:17.861 Virtualization Management: Not Supported 00:28:17.861 Doorbell Buffer Config: Not Supported 00:28:17.861 Get LBA Status Capability: Not Supported 00:28:17.861 Command & Feature Lockdown Capability: Not Supported 00:28:17.861 Abort Command Limit: 1 00:28:17.861 Async Event Request Limit: 1 00:28:17.861 Number of Firmware Slots: N/A 00:28:17.861 Firmware Slot 1 Read-Only: N/A 00:28:17.861 Firmware Activation Without Reset: N/A 00:28:17.861 Multiple Update Detection Support: N/A 00:28:17.861 Firmware Update Granularity: No Information Provided 00:28:17.861 Per-Namespace SMART Log: No 00:28:17.861 Asymmetric Namespace Access Log Page: Not Supported 00:28:17.861 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:17.861 Command Effects Log Page: Not Supported 00:28:17.861 Get Log Page Extended Data: Supported 00:28:17.861 Telemetry Log Pages: Not Supported 00:28:17.861 Persistent Event Log Pages: Not Supported 00:28:17.861 Supported Log Pages Log Page: May Support 00:28:17.861 Commands Supported & Effects Log Page: Not Supported 00:28:17.861 Feature Identifiers & Effects Log Page:May Support 00:28:17.861 NVMe-MI Commands & Effects Log Page: May Support 00:28:17.861 Data Area 4 for Telemetry Log: Not Supported 00:28:17.861 Error Log Page Entries Supported: 1 00:28:17.861 Keep Alive: Not Supported 00:28:17.861 00:28:17.861 NVM Command Set Attributes 00:28:17.861 ========================== 00:28:17.861 Submission Queue Entry Size 00:28:17.861 Max: 1 00:28:17.861 Min: 1 00:28:17.861 Completion Queue Entry Size 00:28:17.861 Max: 1 00:28:17.861 Min: 1 00:28:17.861 Number of Namespaces: 0 00:28:17.861 Compare Command: Not Supported 00:28:17.861 Write Uncorrectable Command: Not Supported 00:28:17.861 Dataset Management Command: Not Supported 00:28:17.861 Write Zeroes Command: Not Supported 00:28:17.861 Set Features Save Field: Not Supported 00:28:17.861 Reservations: Not Supported 00:28:17.861 Timestamp: Not Supported 00:28:17.861 Copy: Not Supported 00:28:17.861 Volatile Write Cache: Not Present 00:28:17.861 Atomic Write Unit (Normal): 1 00:28:17.861 Atomic Write Unit (PFail): 1 00:28:17.861 Atomic Compare & Write Unit: 1 00:28:17.861 Fused Compare & Write: Not Supported 00:28:17.862 Scatter-Gather List 00:28:17.862 SGL Command Set: Supported 00:28:17.862 SGL Keyed: Not Supported 00:28:17.862 SGL Bit Bucket Descriptor: Not Supported 00:28:17.862 SGL Metadata Pointer: Not Supported 00:28:17.862 Oversized SGL: Not Supported 00:28:17.862 SGL Metadata Address: Not Supported 00:28:17.862 SGL Offset: Supported 00:28:17.862 Transport SGL Data Block: Not Supported 00:28:17.862 Replay Protected Memory Block: Not Supported 00:28:17.862 00:28:17.862 Firmware Slot Information 00:28:17.862 ========================= 00:28:17.862 Active slot: 0 00:28:17.862 00:28:17.862 00:28:17.862 Error Log 00:28:17.862 
========= 00:28:17.862 00:28:17.862 Active Namespaces 00:28:17.862 ================= 00:28:17.862 Discovery Log Page 00:28:17.862 ================== 00:28:17.862 Generation Counter: 2 00:28:17.862 Number of Records: 2 00:28:17.862 Record Format: 0 00:28:17.862 00:28:17.862 Discovery Log Entry 0 00:28:17.862 ---------------------- 00:28:17.862 Transport Type: 3 (TCP) 00:28:17.862 Address Family: 1 (IPv4) 00:28:17.862 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:17.862 Entry Flags: 00:28:17.862 Duplicate Returned Information: 0 00:28:17.862 Explicit Persistent Connection Support for Discovery: 0 00:28:17.862 Transport Requirements: 00:28:17.862 Secure Channel: Not Specified 00:28:17.862 Port ID: 1 (0x0001) 00:28:17.862 Controller ID: 65535 (0xffff) 00:28:17.862 Admin Max SQ Size: 32 00:28:17.862 Transport Service Identifier: 4420 00:28:17.862 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:17.862 Transport Address: 10.0.0.1 00:28:17.862 Discovery Log Entry 1 00:28:17.862 ---------------------- 00:28:17.862 Transport Type: 3 (TCP) 00:28:17.862 Address Family: 1 (IPv4) 00:28:17.862 Subsystem Type: 2 (NVM Subsystem) 00:28:17.862 Entry Flags: 00:28:17.862 Duplicate Returned Information: 0 00:28:17.862 Explicit Persistent Connection Support for Discovery: 0 00:28:17.862 Transport Requirements: 00:28:17.862 Secure Channel: Not Specified 00:28:17.862 Port ID: 1 (0x0001) 00:28:17.862 Controller ID: 65535 (0xffff) 00:28:17.862 Admin Max SQ Size: 32 00:28:17.862 Transport Service Identifier: 4420 00:28:17.862 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:17.862 Transport Address: 10.0.0.1 00:28:17.862 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:17.862 get_feature(0x01) failed 00:28:17.862 get_feature(0x02) failed 00:28:17.862 get_feature(0x04) failed 00:28:17.862 ===================================================== 00:28:17.862 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:17.862 ===================================================== 00:28:17.862 Controller Capabilities/Features 00:28:17.862 ================================ 00:28:17.862 Vendor ID: 0000 00:28:17.862 Subsystem Vendor ID: 0000 00:28:17.862 Serial Number: 2abab5ac1a60b8110ec4 00:28:17.862 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:17.862 Firmware Version: 6.8.9-20 00:28:17.862 Recommended Arb Burst: 6 00:28:17.862 IEEE OUI Identifier: 00 00 00 00:28:17.862 Multi-path I/O 00:28:17.862 May have multiple subsystem ports: Yes 00:28:17.862 May have multiple controllers: Yes 00:28:17.862 Associated with SR-IOV VF: No 00:28:17.862 Max Data Transfer Size: Unlimited 00:28:17.862 Max Number of Namespaces: 1024 00:28:17.862 Max Number of I/O Queues: 128 00:28:17.862 NVMe Specification Version (VS): 1.3 00:28:17.862 NVMe Specification Version (Identify): 1.3 00:28:17.862 Maximum Queue Entries: 1024 00:28:17.862 Contiguous Queues Required: No 00:28:17.862 Arbitration Mechanisms Supported 00:28:17.862 Weighted Round Robin: Not Supported 00:28:17.862 Vendor Specific: Not Supported 00:28:17.862 Reset Timeout: 7500 ms 00:28:17.862 Doorbell Stride: 4 bytes 00:28:17.862 NVM Subsystem Reset: Not Supported 00:28:17.862 Command Sets Supported 00:28:17.862 NVM Command Set: Supported 00:28:17.862 Boot Partition: Not Supported 00:28:17.862 
Memory Page Size Minimum: 4096 bytes 00:28:17.862 Memory Page Size Maximum: 4096 bytes 00:28:17.862 Persistent Memory Region: Not Supported 00:28:17.862 Optional Asynchronous Events Supported 00:28:17.862 Namespace Attribute Notices: Supported 00:28:17.862 Firmware Activation Notices: Not Supported 00:28:17.862 ANA Change Notices: Supported 00:28:17.862 PLE Aggregate Log Change Notices: Not Supported 00:28:17.862 LBA Status Info Alert Notices: Not Supported 00:28:17.862 EGE Aggregate Log Change Notices: Not Supported 00:28:17.862 Normal NVM Subsystem Shutdown event: Not Supported 00:28:17.862 Zone Descriptor Change Notices: Not Supported 00:28:17.862 Discovery Log Change Notices: Not Supported 00:28:17.862 Controller Attributes 00:28:17.862 128-bit Host Identifier: Supported 00:28:17.862 Non-Operational Permissive Mode: Not Supported 00:28:17.862 NVM Sets: Not Supported 00:28:17.862 Read Recovery Levels: Not Supported 00:28:17.862 Endurance Groups: Not Supported 00:28:17.862 Predictable Latency Mode: Not Supported 00:28:17.862 Traffic Based Keep ALive: Supported 00:28:17.862 Namespace Granularity: Not Supported 00:28:17.862 SQ Associations: Not Supported 00:28:17.862 UUID List: Not Supported 00:28:17.862 Multi-Domain Subsystem: Not Supported 00:28:17.862 Fixed Capacity Management: Not Supported 00:28:17.862 Variable Capacity Management: Not Supported 00:28:17.862 Delete Endurance Group: Not Supported 00:28:17.862 Delete NVM Set: Not Supported 00:28:17.862 Extended LBA Formats Supported: Not Supported 00:28:17.862 Flexible Data Placement Supported: Not Supported 00:28:17.862 00:28:17.862 Controller Memory Buffer Support 00:28:17.862 ================================ 00:28:17.862 Supported: No 00:28:17.862 00:28:17.862 Persistent Memory Region Support 00:28:17.862 ================================ 00:28:17.862 Supported: No 00:28:17.862 00:28:17.862 Admin Command Set Attributes 00:28:17.862 ============================ 00:28:17.862 Security Send/Receive: Not Supported 00:28:17.862 Format NVM: Not Supported 00:28:17.862 Firmware Activate/Download: Not Supported 00:28:17.862 Namespace Management: Not Supported 00:28:17.862 Device Self-Test: Not Supported 00:28:17.862 Directives: Not Supported 00:28:17.862 NVMe-MI: Not Supported 00:28:17.862 Virtualization Management: Not Supported 00:28:17.862 Doorbell Buffer Config: Not Supported 00:28:17.862 Get LBA Status Capability: Not Supported 00:28:17.862 Command & Feature Lockdown Capability: Not Supported 00:28:17.862 Abort Command Limit: 4 00:28:17.862 Async Event Request Limit: 4 00:28:17.862 Number of Firmware Slots: N/A 00:28:17.862 Firmware Slot 1 Read-Only: N/A 00:28:17.862 Firmware Activation Without Reset: N/A 00:28:17.862 Multiple Update Detection Support: N/A 00:28:17.862 Firmware Update Granularity: No Information Provided 00:28:17.862 Per-Namespace SMART Log: Yes 00:28:17.862 Asymmetric Namespace Access Log Page: Supported 00:28:17.862 ANA Transition Time : 10 sec 00:28:17.862 00:28:17.862 Asymmetric Namespace Access Capabilities 00:28:17.862 ANA Optimized State : Supported 00:28:17.862 ANA Non-Optimized State : Supported 00:28:17.862 ANA Inaccessible State : Supported 00:28:17.862 ANA Persistent Loss State : Supported 00:28:17.862 ANA Change State : Supported 00:28:17.862 ANAGRPID is not changed : No 00:28:17.862 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:17.862 00:28:17.862 ANA Group Identifier Maximum : 128 00:28:17.862 Number of ANA Group Identifiers : 128 00:28:17.862 Max Number of Allowed Namespaces : 1024 00:28:17.862 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:17.862 Command Effects Log Page: Supported 00:28:17.862 Get Log Page Extended Data: Supported 00:28:17.862 Telemetry Log Pages: Not Supported 00:28:17.862 Persistent Event Log Pages: Not Supported 00:28:17.862 Supported Log Pages Log Page: May Support 00:28:17.862 Commands Supported & Effects Log Page: Not Supported 00:28:17.862 Feature Identifiers & Effects Log Page:May Support 00:28:17.862 NVMe-MI Commands & Effects Log Page: May Support 00:28:17.862 Data Area 4 for Telemetry Log: Not Supported 00:28:17.862 Error Log Page Entries Supported: 128 00:28:17.862 Keep Alive: Supported 00:28:17.862 Keep Alive Granularity: 1000 ms 00:28:17.862 00:28:17.862 NVM Command Set Attributes 00:28:17.862 ========================== 00:28:17.862 Submission Queue Entry Size 00:28:17.862 Max: 64 00:28:17.862 Min: 64 00:28:17.862 Completion Queue Entry Size 00:28:17.862 Max: 16 00:28:17.862 Min: 16 00:28:17.862 Number of Namespaces: 1024 00:28:17.862 Compare Command: Not Supported 00:28:17.862 Write Uncorrectable Command: Not Supported 00:28:17.862 Dataset Management Command: Supported 00:28:17.862 Write Zeroes Command: Supported 00:28:17.863 Set Features Save Field: Not Supported 00:28:17.863 Reservations: Not Supported 00:28:17.863 Timestamp: Not Supported 00:28:17.863 Copy: Not Supported 00:28:17.863 Volatile Write Cache: Present 00:28:17.863 Atomic Write Unit (Normal): 1 00:28:17.863 Atomic Write Unit (PFail): 1 00:28:17.863 Atomic Compare & Write Unit: 1 00:28:17.863 Fused Compare & Write: Not Supported 00:28:17.863 Scatter-Gather List 00:28:17.863 SGL Command Set: Supported 00:28:17.863 SGL Keyed: Not Supported 00:28:17.863 SGL Bit Bucket Descriptor: Not Supported 00:28:17.863 SGL Metadata Pointer: Not Supported 00:28:17.863 Oversized SGL: Not Supported 00:28:17.863 SGL Metadata Address: Not Supported 00:28:17.863 SGL Offset: Supported 00:28:17.863 Transport SGL Data Block: Not Supported 00:28:17.863 Replay Protected Memory Block: Not Supported 00:28:17.863 00:28:17.863 Firmware Slot Information 00:28:17.863 ========================= 00:28:17.863 Active slot: 0 00:28:17.863 00:28:17.863 Asymmetric Namespace Access 00:28:17.863 =========================== 00:28:17.863 Change Count : 0 00:28:17.863 Number of ANA Group Descriptors : 1 00:28:17.863 ANA Group Descriptor : 0 00:28:17.863 ANA Group ID : 1 00:28:17.863 Number of NSID Values : 1 00:28:17.863 Change Count : 0 00:28:17.863 ANA State : 1 00:28:17.863 Namespace Identifier : 1 00:28:17.863 00:28:17.863 Commands Supported and Effects 00:28:17.863 ============================== 00:28:17.863 Admin Commands 00:28:17.863 -------------- 00:28:17.863 Get Log Page (02h): Supported 00:28:17.863 Identify (06h): Supported 00:28:17.863 Abort (08h): Supported 00:28:17.863 Set Features (09h): Supported 00:28:17.863 Get Features (0Ah): Supported 00:28:17.863 Asynchronous Event Request (0Ch): Supported 00:28:17.863 Keep Alive (18h): Supported 00:28:17.863 I/O Commands 00:28:17.863 ------------ 00:28:17.863 Flush (00h): Supported 00:28:17.863 Write (01h): Supported LBA-Change 00:28:17.863 Read (02h): Supported 00:28:17.863 Write Zeroes (08h): Supported LBA-Change 00:28:17.863 Dataset Management (09h): Supported 00:28:17.863 00:28:17.863 Error Log 00:28:17.863 ========= 00:28:17.863 Entry: 0 00:28:17.863 Error Count: 0x3 00:28:17.863 Submission Queue Id: 0x0 00:28:17.863 Command Id: 0x5 00:28:17.863 Phase Bit: 0 00:28:17.863 Status Code: 0x2 00:28:17.863 Status Code Type: 0x0 00:28:17.863 Do Not Retry: 1 00:28:17.863 
Error Location: 0x28 00:28:17.863 LBA: 0x0 00:28:17.863 Namespace: 0x0 00:28:17.863 Vendor Log Page: 0x0 00:28:17.863 ----------- 00:28:17.863 Entry: 1 00:28:17.863 Error Count: 0x2 00:28:17.863 Submission Queue Id: 0x0 00:28:17.863 Command Id: 0x5 00:28:17.863 Phase Bit: 0 00:28:17.863 Status Code: 0x2 00:28:17.863 Status Code Type: 0x0 00:28:17.863 Do Not Retry: 1 00:28:17.863 Error Location: 0x28 00:28:17.863 LBA: 0x0 00:28:17.863 Namespace: 0x0 00:28:17.863 Vendor Log Page: 0x0 00:28:17.863 ----------- 00:28:17.863 Entry: 2 00:28:17.863 Error Count: 0x1 00:28:17.863 Submission Queue Id: 0x0 00:28:17.863 Command Id: 0x4 00:28:17.863 Phase Bit: 0 00:28:17.863 Status Code: 0x2 00:28:17.863 Status Code Type: 0x0 00:28:17.863 Do Not Retry: 1 00:28:17.863 Error Location: 0x28 00:28:17.863 LBA: 0x0 00:28:17.863 Namespace: 0x0 00:28:17.863 Vendor Log Page: 0x0 00:28:17.863 00:28:17.863 Number of Queues 00:28:17.863 ================ 00:28:17.863 Number of I/O Submission Queues: 128 00:28:17.863 Number of I/O Completion Queues: 128 00:28:17.863 00:28:17.863 ZNS Specific Controller Data 00:28:17.863 ============================ 00:28:17.863 Zone Append Size Limit: 0 00:28:17.863 00:28:17.863 00:28:17.863 Active Namespaces 00:28:17.863 ================= 00:28:17.863 get_feature(0x05) failed 00:28:17.863 Namespace ID:1 00:28:17.863 Command Set Identifier: NVM (00h) 00:28:17.863 Deallocate: Supported 00:28:17.863 Deallocated/Unwritten Error: Not Supported 00:28:17.863 Deallocated Read Value: Unknown 00:28:17.863 Deallocate in Write Zeroes: Not Supported 00:28:17.863 Deallocated Guard Field: 0xFFFF 00:28:17.863 Flush: Supported 00:28:17.863 Reservation: Not Supported 00:28:17.863 Namespace Sharing Capabilities: Multiple Controllers 00:28:17.863 Size (in LBAs): 3750748848 (1788GiB) 00:28:17.863 Capacity (in LBAs): 3750748848 (1788GiB) 00:28:17.863 Utilization (in LBAs): 3750748848 (1788GiB) 00:28:17.863 UUID: b2709c6e-b927-471d-b8b7-84156c0b8195 00:28:17.863 Thin Provisioning: Not Supported 00:28:17.863 Per-NS Atomic Units: Yes 00:28:17.863 Atomic Write Unit (Normal): 8 00:28:17.863 Atomic Write Unit (PFail): 8 00:28:17.863 Preferred Write Granularity: 8 00:28:17.863 Atomic Compare & Write Unit: 8 00:28:17.863 Atomic Boundary Size (Normal): 0 00:28:17.863 Atomic Boundary Size (PFail): 0 00:28:17.863 Atomic Boundary Offset: 0 00:28:17.863 NGUID/EUI64 Never Reused: No 00:28:17.863 ANA group ID: 1 00:28:17.863 Namespace Write Protected: No 00:28:17.863 Number of LBA Formats: 1 00:28:17.863 Current LBA Format: LBA Format #00 00:28:17.863 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:17.863 00:28:17.863 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:17.863 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:17.863 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:17.863 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:17.863 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:17.863 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:17.863 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:17.863 rmmod nvme_tcp 00:28:18.124 rmmod nvme_fabrics 00:28:18.124 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:18.124 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:18.124 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:18.124 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:18.124 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:18.124 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:18.124 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:18.124 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:18.124 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:28:18.124 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:18.124 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:18.124 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:18.124 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:18.124 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.124 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:18.124 13:32:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.038 13:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:20.038 13:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:20.038 13:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:20.038 13:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:20.038 13:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:20.038 13:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:20.038 13:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:20.038 13:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:20.038 13:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:20.038 13:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:20.298 13:32:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:24.503 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:24.503 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:24.503 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:28:24.503 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:24.503 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:24.503 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:24.503 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:24.503 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:24.504 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:24.504 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:24.504 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:24.504 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:24.504 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:24.504 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:24.504 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:24.504 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:24.504 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:24.764 00:28:24.764 real 0m21.279s 00:28:24.764 user 0m5.958s 00:28:24.764 sys 0m12.473s 00:28:24.764 13:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.764 13:32:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:24.764 ************************************ 00:28:24.764 END TEST nvmf_identify_kernel_target 00:28:24.764 ************************************ 00:28:24.764 13:32:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:24.764 13:32:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:24.764 13:32:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:24.764 13:32:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.764 ************************************ 00:28:24.764 START TEST nvmf_auth_host 00:28:24.764 ************************************ 00:28:24.764 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:24.764 * Looking for test storage... 
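The scripts/common.sh churn that follows is nothing more than a dotted-version comparison: it decides whether the installed lcov (1.15 here) is older than 2, so the matching coverage flags can be exported. A stand-alone paraphrase of that traced logic, assuming purely numeric fields (the real script also regex-validates each field):

# lt A B: succeed when version A sorts before version B
lt() {
    local -a ver1 ver2
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 succeeds on the first field (1 < 2), so the run below exports the lcov 1.x option spellings (--rc lcov_branch_coverage=1 and friends) rather than the 2.x ones.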
00:28:24.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:24.764 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:24.764 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:24.764 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:25.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.024 --rc genhtml_branch_coverage=1 00:28:25.024 --rc genhtml_function_coverage=1 00:28:25.024 --rc genhtml_legend=1 00:28:25.024 --rc geninfo_all_blocks=1 00:28:25.024 --rc geninfo_unexecuted_blocks=1 00:28:25.024 00:28:25.024 ' 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:25.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.024 --rc genhtml_branch_coverage=1 00:28:25.024 --rc genhtml_function_coverage=1 00:28:25.024 --rc genhtml_legend=1 00:28:25.024 --rc geninfo_all_blocks=1 00:28:25.024 --rc geninfo_unexecuted_blocks=1 00:28:25.024 00:28:25.024 ' 00:28:25.024 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:25.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.024 --rc genhtml_branch_coverage=1 00:28:25.024 --rc genhtml_function_coverage=1 00:28:25.025 --rc genhtml_legend=1 00:28:25.025 --rc geninfo_all_blocks=1 00:28:25.025 --rc geninfo_unexecuted_blocks=1 00:28:25.025 00:28:25.025 ' 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:25.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.025 --rc genhtml_branch_coverage=1 00:28:25.025 --rc genhtml_function_coverage=1 00:28:25.025 --rc genhtml_legend=1 00:28:25.025 --rc geninfo_all_blocks=1 00:28:25.025 --rc geninfo_unexecuted_blocks=1 00:28:25.025 00:28:25.025 ' 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.025 13:32:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:25.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:28:25.025 13:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:33.161 13:32:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:33.161 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:33.161 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.161 
13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:33.161 Found net devices under 0000:31:00.0: cvl_0_0 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:33.161 Found net devices under 0000:31:00.1: cvl_0_1 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.161 13:32:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.161 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:33.162 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:33.162 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.162 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.162 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:33.162 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:33.162 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.162 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.162 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:33.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.575 ms 00:28:33.422 00:28:33.422 --- 10.0.0.2 ping statistics --- 00:28:33.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.422 rtt min/avg/max/mdev = 0.575/0.575/0.575/0.000 ms 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:33.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:28:33.422 00:28:33.422 --- 10.0.0.1 ping statistics --- 00:28:33.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.422 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1096629 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1096629 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1096629 ']' 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
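Both pings answering confirms the topology nvmftestinit just built: the two physical ports are looped back-to-back, and the target-side NIC is moved into its own network namespace so a single host can play both ends of the NVMe/TCP connection. Condensed from the trace above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
# target-side processes are then wrapped in the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth

The trace also tags the iptables rule with an SPDK_NVMF comment, which is what lets the iptr cleanup helper (seen at the end of the previous test) strip exactly these rules on teardown.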
00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.422 13:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e0b5402c67f5aa3e6a6c7c93e4ebc64e 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.yn0 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e0b5402c67f5aa3e6a6c7c93e4ebc64e 0 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e0b5402c67f5aa3e6a6c7c93e4ebc64e 0 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e0b5402c67f5aa3e6a6c7c93e4ebc64e 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:33.682 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.yn0 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.yn0 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.yn0 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:33.943 13:32:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a4bbb21b91e5267bfffff26179dffece3ccbe3b0a7c405e7b0448c008031535d 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.HKV 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a4bbb21b91e5267bfffff26179dffece3ccbe3b0a7c405e7b0448c008031535d 3 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a4bbb21b91e5267bfffff26179dffece3ccbe3b0a7c405e7b0448c008031535d 3 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a4bbb21b91e5267bfffff26179dffece3ccbe3b0a7c405e7b0448c008031535d 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.HKV 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.HKV 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.HKV 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9a6d106ba5513b2a94a8831e0fe30df0493a658b2881ac45 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.BIh 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9a6d106ba5513b2a94a8831e0fe30df0493a658b2881ac45 0 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9a6d106ba5513b2a94a8831e0fe30df0493a658b2881ac45 0 
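Each gen_dhchap_key <digest> <len> call above reads len/2 random bytes with xxd -p -c0 /dev/urandom and hands the hex string to the inline python - step, which wraps it into the DH-HMAC-CHAP secret form DHHC-1:<digest-id>:<base64 payload>:. A standalone sketch of that formatting step follows; the payload layout (ASCII hex key with a little-endian CRC-32 appended before base64 encoding) is an assumption inferred from the secrets echoed later in this log, not shown verbatim in the xtrace:

python3 - <<'EOF'
import base64, zlib
key = b"9a6d106ba5513b2a94a8831e0fe30df0493a658b2881ac45"  # hex string from the xxd call above
digest = 0                                 # digests map above: null=0, sha256=1, sha384=2, sha512=3
crc = zlib.crc32(key).to_bytes(4, "little")                # assumed 4-byte CRC-32 trailer
print("DHHC-1:%02d:%s:" % (digest, base64.b64encode(key + crc).decode()))
EOF

If the layout assumption holds, this reproduces the DHHC-1:00:OWE2ZDEw...fy4wfA==: secret that reappears in the nvmet_auth_set_key calls further down.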
00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9a6d106ba5513b2a94a8831e0fe30df0493a658b2881ac45 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.BIh 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.BIh 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.BIh 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b3627ce1233e5da78e6b0fbcce9b02f7598116fbb6dee9a9 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.XNy 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b3627ce1233e5da78e6b0fbcce9b02f7598116fbb6dee9a9 2 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b3627ce1233e5da78e6b0fbcce9b02f7598116fbb6dee9a9 2 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b3627ce1233e5da78e6b0fbcce9b02f7598116fbb6dee9a9 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.XNy 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.XNy 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.XNy 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:33.943 13:32:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bc29593a954021c7e66cab825ae7011d 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.frT 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bc29593a954021c7e66cab825ae7011d 1 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bc29593a954021c7e66cab825ae7011d 1 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bc29593a954021c7e66cab825ae7011d 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:33.943 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.frT 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.frT 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.frT 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b644473e3ccd6e82722d00762324b0bc 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZE7 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b644473e3ccd6e82722d00762324b0bc 1 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b644473e3ccd6e82722d00762324b0bc 1 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=b644473e3ccd6e82722d00762324b0bc 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZE7 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZE7 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ZE7 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5926ae66947e3f6ad257081efa0f0cf9462e89ac9696c593 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Bdv 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5926ae66947e3f6ad257081efa0f0cf9462e89ac9696c593 2 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5926ae66947e3f6ad257081efa0f0cf9462e89ac9696c593 2 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5926ae66947e3f6ad257081efa0f0cf9462e89ac9696c593 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Bdv 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Bdv 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Bdv 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:34.205 13:32:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=730ce3714db11994e6c19aa11e24485a 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.C6U 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 730ce3714db11994e6c19aa11e24485a 0 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 730ce3714db11994e6c19aa11e24485a 0 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=730ce3714db11994e6c19aa11e24485a 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:34.205 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.C6U 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.C6U 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.C6U 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3f98052619f5fb5fed677b13c13a5af713604db353a4d97d83e76d00d6006d98 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.BKw 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3f98052619f5fb5fed677b13c13a5af713604db353a4d97d83e76d00d6006d98 3 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3f98052619f5fb5fed677b13c13a5af713604db353a4d97d83e76d00d6006d98 3 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3f98052619f5fb5fed677b13c13a5af713604db353a4d97d83e76d00d6006d98 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:34.206 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.BKw 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.BKw 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.BKw 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1096629 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1096629 ']' 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:34.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.yn0 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.HKV ]] 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.HKV 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.BIh 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.466 13:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.466 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.467 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.XNy ]] 00:28:34.467 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.XNy 00:28:34.467 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.467 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.467 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.467 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:34.467 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.frT 00:28:34.467 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.467 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.467 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.467 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ZE7 ]] 00:28:34.467 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ZE7 00:28:34.467 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.467 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Bdv 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.C6U ]] 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.C6U 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.BKw 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.727 13:32:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]]
00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:28:34.727 13:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:28:38.925 Waiting for block devices as requested
00:28:38.926 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:28:38.926 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:28:38.926 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:28:38.926 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:28:38.926 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:28:38.926 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:28:38.926 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:28:38.926 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:28:38.926 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:28:39.186 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:28:39.186 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:28:39.447 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:28:39.447 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:28:39.447 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:28:39.447 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:28:39.708 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:28:39.708 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:28:40.650 No valid GPT data, bailing
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:28:40.650 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420
00:28:40.650
00:28:40.650 Discovery Log Number of Records 2, Generation counter 2
00:28:40.650 =====Discovery Log Entry 0======
00:28:40.650 trtype: tcp
00:28:40.650 adrfam: ipv4
00:28:40.650 subtype: current discovery subsystem
00:28:40.650 treq: not specified, sq flow control disable supported
00:28:40.650 portid: 1
00:28:40.650 trsvcid: 4420
00:28:40.650 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:28:40.650 traddr: 10.0.0.1
00:28:40.650 eflags: none
00:28:40.650 sectype: none
00:28:40.650 =====Discovery Log Entry 1======
00:28:40.650 trtype: tcp
00:28:40.650 adrfam: ipv4
00:28:40.650 subtype: nvme subsystem
00:28:40.650 treq: not specified, sq flow control disable supported
00:28:40.651 portid: 1
00:28:40.651 trsvcid: 4420
00:28:40.651 subnqn: nqn.2024-02.io.spdk:cnode0
00:28:40.651 traddr: 10.0.0.1
00:28:40.651 eflags: none
00:28:40.651 sectype: none
00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==:
00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==:
00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:40.651 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.912 nvme0n1 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.912 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: ]] 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
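nvmet_auth_set_key drives the kernel-target half of each handshake: the echo 'hmac(sha256)', echo ffdhe2048 and echo DHHC-1:... steps seen above land in configfs attributes of the host entry created at host/auth.sh@36. A sketch of the equivalent manual configuration; the attribute names are assumptions based on the kernel nvmet configfs layout, since the redirection targets are not visible in the xtrace output:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"    # DH-HMAC-CHAP digest (assumed attribute name)
echo ffdhe2048 > "$host/dhchap_dhgroup"      # DH group (assumed attribute name)
echo 'DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==:' > "$host/dhchap_key"
echo 'DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==:' > "$host/dhchap_ctrl_key"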
00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.913 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.174 nvme0n1 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.174 13:33:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.174 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.435 nvme0n1 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.435 13:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.695 nvme0n1 00:28:41.695 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.695 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.695 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.695 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:41.695 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.695 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.695 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.695 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.695 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.695 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.695 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.695 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.695 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: ]] 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.696 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.957 nvme0n1 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.957 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.958 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.958 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.958 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.958 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.958 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:41.958 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.958 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.218 nvme0n1 00:28:42.218 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.218 13:33:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.218 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.218 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.218 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.218 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.218 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.218 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: ]] 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.219 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.479 nvme0n1 00:28:42.479 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.479 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.479 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.479 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.479 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.479 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.479 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.480 
13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.480 13:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.740 nvme0n1 00:28:42.740 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.740 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.740 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.740 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.740 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.740 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.741 13:33:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.741 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.000 nvme0n1 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: ]] 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.001 13:33:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.001 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.262 nvme0n1 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:43.262 13:33:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.262 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.524 nvme0n1 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: ]] 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.524 13:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.786 nvme0n1 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:43.786 13:33:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:43.786 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.787 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.358 nvme0n1 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
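[Editor's note: the trace above and below repeats one DH-HMAC-CHAP authentication cycle per (digest, dhgroup, keyid) combination -- here sha256 against ffdhe2048, ffdhe3072, ffdhe4096 and then ffdhe6144, each for key ids 0-4. Below is a minimal sketch of one such cycle, reconstructed only from the commands visible in the xtrace output; it is an illustration, not the auth.sh source. Assumptions to note: rpc_cmd appears to be the test harness wrapper around SPDK's scripts/rpc.py; the configfs attribute names in the comments are our annotation (xtrace does not show where host/auth.sh@48-50 redirect their echos); and key2/ckey2 are key names registered earlier in the test, outside this excerpt.

    # Target side (host/auth.sh@48-50): push digest, dhgroup and secret for
    # this host toward the kernel nvmet target -- presumably its configfs
    # dhchap_hash, dhchap_dhgroup and dhchap_key attributes (assumed targets;
    # the trace shows only the echoed values).
    echo 'hmac(sha256)'
    echo ffdhe2048
    echo 'DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O:'

    # Host side (host/auth.sh@60-61): restrict the initiator to the one
    # digest/dhgroup pair under test, then connect with the per-keyid secret.
    # --dhchap-ctrlr-key is only passed when a controller (bidirectional)
    # key exists for this keyid, per the ckey=(${ckeys[keyid]:+...}) guard.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Verify the controller actually authenticated and came up, then detach
    # so the next combination starts clean (host/auth.sh@64-65).
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The bare "nvme0n1" lines interleaved in the trace are the bdev name printed by bdev_nvme_attach_controller for the namespace it discovers on each successful connect; iterations with an empty ckey (keyid 4 above) exercise unidirectional authentication, where the controller does not authenticate back to the host.]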
00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.358 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.619 nvme0n1 00:28:44.619 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.619 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.619 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.619 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.619 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.619 13:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: ]] 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:44.619 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.620 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.880 nvme0n1 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.880 13:33:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.880 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.881 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.881 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.881 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.881 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.881 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.881 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.881 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.881 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.881 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.881 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.881 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.881 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:44.881 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.881 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.141 nvme0n1 00:28:45.141 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.141 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.141 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.141 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.141 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: ]] 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.402 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.403 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.403 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.403 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.403 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.403 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.403 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:28:45.403 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.403 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.403 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:45.403 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.403 13:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.977 nvme0n1 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 
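(annotation) Every block in this section is the same round-trip, repeated per digest/DH-group/key id. A minimal bash sketch of one round, reconstructed from the rpc_cmd invocations visible in the log; connect_round is an illustrative name, not a function in host/auth.sh, and the real script only passes --dhchap-ctrlr-key when a controller key exists for the key id:

    connect_round() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Pin the host to a single digest/DH-group pair, so a successful
        # attach proves exactly that combination negotiated.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach over TCP with the DH-CHAP key pair provisioned for this key id.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
        # Authentication passed iff the controller materialized; then clean up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }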
00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.977 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.238 nvme0n1 00:28:46.238 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.238 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.238 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.238 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.238 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.238 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.499 13:33:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.499 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.500 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.500 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:46.500 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.500 13:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.760 nvme0n1 00:28:46.760 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.022 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.022 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.022 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.022 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.022 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.022 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.022 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.022 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.022 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.022 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.022 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.022 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:47.022 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.022 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:47.022 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:47.022 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: ]] 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.023 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.594 nvme0n1 00:28:47.594 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.594 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.594 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.594 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.594 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
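(annotation) The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line at auth.sh@58 is how bidirectional authentication stays optional: key id 4 has an empty controller key in this run (the [[ -z '' ]] check above), so its attach carries no --dhchap-ctrlr-key at all. A self-contained demo of that expansion, with made-up placeholder values:

    # ${var:+word} expands to word only when var is set and non-empty,
    # so an empty ckey yields an empty array and the flag vanishes.
    ckeys=("ck0" "ck1" "ck2" "ck3" "")   # placeholders; index 4 empty as in this run
    for keyid in 3 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
    done
    # prints: keyid=3 extra args: --dhchap-ctrlr-key ckey3
    #         keyid=4 extra args: <none>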
nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.595 13:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.857 nvme0n1 00:28:47.857 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: ]] 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:48.119 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.120 13:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:48.693 nvme0n1 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.955 13:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.899 nvme0n1 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:49.899 
13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.899 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.473 nvme0n1 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:50.473 13:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:50.473 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: ]] 00:28:50.473 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:50.473 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:50.473 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.473 
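(annotation) All secrets above use the NVMe DH-HMAC-CHAP "DHHC-1" representation. A quick dissection of one key copied verbatim from this log; the second field is the transformation hash (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the secret with a 4-byte CRC-32 appended:

    key='DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==:'
    IFS=: read -r magic hash b64 _ <<< "$key"
    echo "magic=$magic hash-id=$hash"       # DHHC-1, 02 (SHA-384)
    printf '%s' "$b64" | base64 -d | wc -c  # 52 = 48-byte secret + 4-byte CRC-32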
13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:50.473 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:50.473 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:50.473 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.473 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:50.473 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.473 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.474 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.474 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.474 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.474 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.474 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.474 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.474 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.474 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.474 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.474 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.474 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.474 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.474 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:50.474 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.474 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.419 nvme0n1 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:51.419 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.420 13:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.365 nvme0n1 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: ]] 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:52.365 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.366 nvme0n1 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.366 13:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.628 nvme0n1 00:28:52.628 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:52.629 13:33:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.629 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.630 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.630 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.630 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.630 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.630 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.630 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.630 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.630 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.630 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.630 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:52.630 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.630 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.892 nvme0n1 00:28:52.892 13:33:15 
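The for-loops traced at host/auth.sh@100, @101 and @102 in the surrounding entries give this sweep its shape: every DH-CHAP digest is paired with every DH group and every key index, and each combination is first programmed into the kernel nvmet target, then proven with a live connect. A minimal sketch of that driver, assuming the digests, dhgroups and keys arrays are populated earlier in auth.sh (their definitions fall outside this excerpt):

# Sketch of the sweep behind the host/auth.sh@100-104 trace lines; the
# digests, dhgroups and keys arrays are filled in earlier in the script.
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Program the nvmet target with this digest/dhgroup/key combination,
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # then verify an authenticated connect succeeds end to end.
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
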
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: ]] 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.892 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.153 nvme0n1 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.153 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.414 nvme0n1 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: ]] 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.414 13:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.675 nvme0n1 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.675 
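The get_main_ns_ip trace that precedes every attach (nvmf/common.sh@769 through @783, ending in "echo 10.0.0.1" above) resolves which address the host should dial: it maps the transport under test to the name of the environment variable holding the initiator-side IP, then expands that variable indirectly. Reconstructed from the traced lines; TEST_TRANSPORT is an assumed name for whatever expands to "tcp" at common.sh@775:

# Reconstruction of get_main_ns_ip from the nvmf/common.sh@769-783 trace;
# TEST_TRANSPORT stands in for the variable that expands to "tcp" here.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}  # picks NVMF_INITIATOR_IP for tcp
    ip=${!ip}                             # indirect expansion, 10.0.0.1 in this run
    [[ -z $ip ]] && return 1
    echo "$ip"
}
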
13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.675 13:33:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.675 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.937 nvme0n1 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:53.937 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:53.938 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:53.938 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:53.938 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:53.938 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:53.938 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:53.938 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:53.938 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:53.938 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:53.938 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.938 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.199 nvme0n1 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: ]] 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.199 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.461 nvme0n1 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:54.461 
13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.461 13:33:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.722 nvme0n1 00:28:54.722 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.722 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.722 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.722 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.722 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.722 
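Each nvmet_auth_set_key call traced above (host/auth.sh@42 through @51) boils down to the four echoes visible in the log: a digest string such as 'hmac(sha384)', a DH group, the DHHC-1 host secret, and, only when the key index defines one, a controller secret for bidirectional authentication (keyid 4 carries none, hence the [[ -z '' ]] at @51). The redirections themselves are not shown in the trace; the sketch below assumes they land in the upstream Linux nvmet configfs attributes for the host entry, a guess at the helper's body rather than a copy of it:

# Assumed expansion of the four echoes in nvmet_auth_set_key; the configfs
# paths follow upstream Linux nvmet (5.19+) and are not visible in the log.
hostnqn=nqn.2024-02.io.spdk:host0
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn
key='DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=:'
ckey=''    # keyid 4 defines no controller key in this run

echo 'hmac(sha384)' > "$host_dir/dhchap_hash"     # host/auth.sh@48
echo ffdhe3072      > "$host_dir/dhchap_dhgroup"  # host/auth.sh@49
echo "$key"         > "$host_dir/dhchap_key"      # host/auth.sh@50
# Skipped when the controller key is empty, matching the @51 guard:
[[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"
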
13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.722 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.722 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.722 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.722 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.722 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.722 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: ]] 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.723 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.984 nvme0n1 00:28:54.984 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.984 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.984 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.984 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.984 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.984 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:55.246 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:55.247 13:33:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.247 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.509 nvme0n1 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.509 13:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.772 nvme0n1 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: ]] 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.772 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.034 nvme0n1 00:28:56.034 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.034 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.034 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.034 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.034 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:56.295 13:33:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.295 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.556 nvme0n1 00:28:56.556 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.556 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.556 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.556 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.556 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.556 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.556 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.556 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.556 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.556 13:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: ]] 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:56.556 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.557 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.129 nvme0n1 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:57.129 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.130 13:33:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.704 nvme0n1 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.704 13:33:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.704 13:33:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.704 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.278 nvme0n1 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: ]] 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.278 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.279 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.279 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:58.279 13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.279 
13:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.851 nvme0n1 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:58.851 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.852 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.113 nvme0n1 00:28:59.113 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.113 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.113 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.113 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.113 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.113 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.113 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.113 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.113 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.113 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.376 13:33:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: ]] 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.376 13:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.949 nvme0n1 00:28:59.949 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.949 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.949 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:59.949 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.949 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.949 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.949 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:59.949 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:59.949 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.949 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.211 13:33:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.783 nvme0n1 00:29:00.783 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.783 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.783 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:00.783 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.783 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.783 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.045 
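
The host/auth.sh@42-@51 entries above trace the target-side half of every iteration: nvmet_auth_set_key loads the digest, DH group, key and (optional) controller key for the host NQN into the kernel nvmet target before the host tries to connect. xtrace does not record where the echoes are redirected, so the configfs attribute paths in the sketch below are assumptions based on the stock Linux nvmet host attributes, not taken from this log; keys/ckeys are the suite's key arrays.

# Minimal sketch of nvmet_auth_set_key as traced at host/auth.sh@42-@51.
# ASSUMPTION: the redirect targets (dhchap_hash, dhchap_dhgroup, dhchap_key,
# dhchap_ctrl_key under /sys/kernel/config/nvmet/hosts/<hostnqn>) are the
# standard nvmet configfs attributes; the xtrace shows only the echoes.
nvmet_auth_set_key() {
    local digest dhgroup keyid key ckey

    digest="$1" dhgroup="$2" keyid="$3"
    key="${keys[keyid]}" ckey="${ckeys[keyid]}"

    local host_dir="/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0"

    echo "hmac(${digest})" > "$host_dir/dhchap_hash"      # e.g. 'hmac(sha384)'
    echo "$dhgroup" > "$host_dir/dhchap_dhgroup"          # e.g. ffdhe8192
    echo "$key" > "$host_dir/dhchap_key"                  # DHHC-1:... host key
    [[ -z $ckey ]] || echo "$ckey" > "$host_dir/dhchap_ctrl_key"
}
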
13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.045 13:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.619 nvme0n1 00:29:01.619 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.619 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:01.619 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:01.619 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.619 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.619 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: ]] 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.880 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.451 nvme0n1 00:29:02.451 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.451 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:02.451 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:02.451 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.451 13:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.451 13:33:24 
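
Each nvmet_auth_set_key call is immediately followed by connect_authenticate (host/auth.sh@55-@65 above): the host is pinned to the one digest/dhgroup combination under test via bdev_nvme_set_options, the controller is attached with the matching key pair, and the iteration passes only if a controller named nvme0 actually shows up before it is detached again. The bare nvme0n1 lines between iterations are most likely the bdev name returned by the attach call. A reconstruction from the traced commands (rpc_cmd is the suite's wrapper around SPDK's scripts/rpc.py; the function layout is inferred from the @-line numbers, not copied from source):

# connect_authenticate as it can be read back from the xtrace.
connect_authenticate() {
    local digest dhgroup keyid ckey

    digest="$1" dhgroup="$2" keyid="$3"
    # Empty array when no controller key exists for this slot (host/auth.sh@58).
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    # Restrict the host to the combination under test (host/auth.sh@60).
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach over TCP and authenticate with key<keyid> (host/auth.sh@61).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # DH-HMAC-CHAP succeeded iff the controller appeared (host/auth.sh@64/@65).
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}
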
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:02.712 13:33:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.712 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.283 nvme0n1 00:29:03.283 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.283 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.283 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.283 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.283 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.283 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: ]] 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.544 13:33:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:03.544 nvme0n1 00:29:03.544 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.544 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.544 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.544 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.544 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.544 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.544 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.544 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.544 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.544 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.806 nvme0n1 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:29:03.806 
13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.806 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.068 nvme0n1 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:29:04.068 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: ]] 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.069 
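
Every secret in the sweep is an NVMe "configured key" of the form DHHC-1:<hh>:<base64>:, where the two-digit field records how the secret was transformed (00 = plain, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret followed by a CRC-32 check value, per the NVMe DH-HMAC-CHAP key format. That prefix describes the key material itself and is independent of the digest negotiated for the session, which is why the same key0..key4/ckey0..ckey4 strings recur unchanged under both the sha384 and sha512 passes. One way to mint keys of this shape, assuming nvme-cli's gen-dhchap-key command (which does not itself appear in this log):

# ASSUMPTION: nvme-cli provides gen-dhchap-key; the flags are illustrative.
# --hmac selects the transform recorded in the DHHC-1:<hh>: prefix.
nvme gen-dhchap-key --hmac=0 --key-length=32 --nqn=nqn.2024-02.io.spdk:host0
# prints something like: DHHC-1:00:<base64(32-byte secret + CRC-32)>:
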
13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.069 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.331 nvme0n1 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.331 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.332 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:04.332 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.332 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.592 nvme0n1 00:29:04.592 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.592 13:33:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.592 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.592 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.592 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.592 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.592 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.592 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: ]] 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.593 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.854 nvme0n1 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.854 
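
The host/auth.sh@58 entry that precedes every attach, ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}), builds an argument array that is empty whenever no controller key is configured for the slot. That is why the keyid 4 attaches in this log pass only --dhchap-key key4 (its ckey expands to '') while keyids 0-3 also pass --dhchap-ctrlr-key: bidirectional authentication is exercised only where a controller key exists. A self-contained illustration of the ${parameter:+word} idiom:

# The array gains the two extra arguments only when ckeys[keyid] is non-empty,
# so "${ckey[@]}" can be spliced into the attach call unconditionally.
ckeys=([0]="DHHC-1:03:..." [4]="")   # shape matches this run: slot 4 has no ctrlr key
for keyid in 0 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
done
# keyid=0 -> 2 extra args: --dhchap-ctrlr-key ckey0
# keyid=4 -> 0 extra args:
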
13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:04.854 13:33:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.854 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.115 nvme0n1 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:29:05.115 13:33:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.115 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.374 nvme0n1 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: ]] 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.375 13:33:27 
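Each nvmet_auth_set_key call above (host/auth.sh@42-51) ends by echoing a kernel digest name, a dhgroup, and the DHHC-1 strings; those are exactly the values the Linux kernel nvmet target accepts through its per-host configfs attributes. The helper's implementation is not in this excerpt, so the following is an inference from the echoed values, assuming the standard nvmet configfs layout and the hostnqn used by the attach commands in this log:

# hypothetical reconstruction of the target-side effect of
# "nvmet_auth_set_key sha512 ffdhe3072 3"; $key and $ckey stand for the
# DHHC-1 strings traced above
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)' > "$host_dir/dhchap_hash"       # digest, kernel crypto API name
echo ffdhe3072 > "$host_dir/dhchap_dhgroup"         # FFDHE group for the DH exchange
echo "$key" > "$host_dir/dhchap_key"                # secret the host must present
[[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"  # controller key, only for bidirectional auth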
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.375 13:33:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.635 nvme0n1 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:29:05.635 
13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.635 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
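Every block in this stretch of the trace is one iteration of the same round trip: program the target's key, restrict the host to a single digest/dhgroup pair, attach with the matching keyring names, confirm the authenticated controller appears, then detach. A paraphrase of that control flow as the trace shows it (host/auth.sh@101-104 driving @42-65); this is a reconstruction for reading the log, not the verbatim auth.sh:

connect_authenticate() {   # reconstruction of host/auth.sh@55-65
    local digest=$1 dhgroup=$2 keyid=$3 ckey
    # pass a controller key only when one is defined for this keyid (@58)
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
    # authentication succeeded iff the controller actually materialized (@64)
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0   # @65
}

for dhgroup in "${dhgroups[@]}"; do        # @101: ffdhe3072, ffdhe4096, ffdhe6144, ...
    for keyid in "${!keys[@]}"; do         # @102: keyids 0..4
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # target side (@103)
        connect_authenticate sha512 "$dhgroup" "$keyid"  # host side (@104)
    done
done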
00:29:05.895 nvme0n1 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:05.895 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: ]] 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:05.896 13:33:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.896 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.465 nvme0n1 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.465 13:33:28 
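The nvmf/common.sh@769-783 runs that precede every attach resolve the target address indirectly: ip_candidates maps a transport name to the name of an environment variable, and bash indirect expansion then dereferences that name. A sketch of the helper as traced; the transport variable's name (TEST_TRANSPORT below) is an assumption, since the trace only shows its expanded value tcp, and the failure branches plus the untraced lines 779-782 are guessed at:

get_main_ns_ip() {   # reconstruction of nvmf/common.sh@769-783
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # @772
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # @773
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @775
    ip=${ip_candidates[$TEST_TRANSPORT]}   # @776: ip holds a variable NAME, not an address
    [[ -z ${!ip} ]] && return 1            # @778: ${!ip} dereferences it, here 10.0.0.1
    echo "${!ip}"                          # @783: the value fed to attach's -a flag
}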
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.465 13:33:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.465 13:33:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.749 nvme0n1 00:29:06.749 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.749 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:06.749 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.749 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.750 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.012 nvme0n1 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: ]] 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.012 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.273 nvme0n1 00:29:07.273 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.273 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.273 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.273 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.273 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.273 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.273 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.273 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.273 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.273 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.534 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.534 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.534 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:29:07.534 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.534 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.534 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:29:07.534 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:07.534 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:29:07.534 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.535 13:33:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.796 nvme0n1 00:29:07.796 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.796 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:07.796 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:07.796 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.796 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.796 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.796 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:07.796 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:07.796 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: ]] 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.797 13:33:30 
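keyid 4 is the one entry with an empty controller key: its blocks above show ckey= followed by [[ -z '' ]], and its attach commands carry only --dhchap-key key4, while keyids 0 through 3 also pass --dhchap-ctrlr-key. The switch is the host/auth.sh@58 expansion, which produces the option pair only when ckeys[keyid] is set and non-empty. A standalone demonstration of the idiom; the array contents are illustrative, not taken from the test:

ckeys=([0]="ctrl-secret-0" [4]="")   # keyid 4 deliberately has no controller key
keyid=0
args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#args[@]}: ${args[*]}"       # prints "2: --dhchap-ctrlr-key ckey0"
keyid=4
args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#args[@]}:"                  # prints "0:" since :+ yields no words for empty/unset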
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.797 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.368 nvme0n1 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:08.368 13:33:30 
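After each attach, the trace verifies that authentication succeeded by listing controllers and comparing the name: host/auth.sh@64 pipes bdev_nvme_get_controllers through jq -r '.[].name' and tests the result with [[ nvme0 == \n\v\m\e\0 ]]. The backslashes are how set -x renders a quoted right-hand side: inside [[ ]] an unquoted RHS of == is a glob pattern, so quoting (printed by xtrace as character-by-character escapes) forces an exact string match. A quick illustration:

[[ nvme0 == \n\v\m\e\0 ]] && echo match    # every escaped char is literal; prints match
[[ nvme0 == "nvme0" ]] && echo match       # identical meaning, the likely source form
[[ nvme01 == nvme0* ]] && echo glob        # unquoted RHS is a pattern; prints glob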
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.368 13:33:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.941 nvme0n1 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.941 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.513 nvme0n1 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: ]] 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:09.513 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.514 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.514 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.514 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:09.514 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:09.514 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:09.514 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:09.514 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:09.514 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:09.514 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:09.514 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:09.514 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:09.514 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:09.514 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:09.514 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:09.514 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.514 13:33:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:09.776 nvme0n1 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:10.037 13:33:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.037 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.611 nvme0n1 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBiNTQwMmM2N2Y1YWEzZTZhNmM3YzkzZTRlYmM2NGWU7Ryz: 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: ]] 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YTRiYmIyMWI5MWU1MjY3YmZmZmZmMjYxNzlkZmZlY2UzY2NiZTNiMGE3YzQwNWU3YjA0NDhjMDA4MDMxNTM1ZOkNBmk=: 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.611 13:33:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.185 nvme0n1 00:29:11.185 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.185 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:11.185 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:11.185 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.185 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.185 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.445 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.446 13:33:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.018 nvme0n1 00:29:12.018 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.018 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:12.018 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:12.018 13:33:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.018 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.018 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.328 13:33:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:12.328 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:12.329 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:12.329 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:12.329 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:12.329 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:12.329 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:12.329 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:12.329 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.329 13:33:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.116 nvme0n1 00:29:13.116 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.116 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.116 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.116 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.116 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTkyNmFlNjY5NDdlM2Y2YWQyNTcwODFlZmEwZjBjZjk0NjJlODlhYzk2OTZjNTkzyrMKnA==: 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: ]] 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzMwY2UzNzE0ZGIxMTk5NGU2YzE5YWExMWUyNDQ4NWGFhwn4: 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:29:13.117 13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.117 
13:33:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.686 nvme0n1 00:29:13.686 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.686 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:13.686 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:13.686 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.686 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2Y5ODA1MjYxOWY1ZmI1ZmVkNjc3YjEzYzEzYTVhZjcxMzYwNGRiMzUzYTRkOTdkODNlNzZkMDBkNjAwNmQ5OEcvWME=: 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.946 13:33:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.516 nvme0n1 00:29:14.516 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.516 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.516 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:29:14.516 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.516 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.777 request: 00:29:14.777 { 00:29:14.777 "name": "nvme0", 00:29:14.777 "trtype": "tcp", 00:29:14.777 "traddr": "10.0.0.1", 00:29:14.777 "adrfam": "ipv4", 00:29:14.777 "trsvcid": "4420", 00:29:14.777 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:14.777 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:14.777 "prchk_reftag": false, 00:29:14.777 "prchk_guard": false, 00:29:14.777 "hdgst": false, 00:29:14.777 "ddgst": false, 00:29:14.777 "allow_unrecognized_csi": false, 00:29:14.777 "method": "bdev_nvme_attach_controller", 00:29:14.777 "req_id": 1 00:29:14.777 } 00:29:14.777 Got JSON-RPC error response 00:29:14.777 response: 00:29:14.777 { 00:29:14.777 "code": -5, 00:29:14.777 "message": "Input/output error" 00:29:14.777 } 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:29:14.777 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
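The records above repeat one pattern for every digest/dhgroup/key combination: configure the target side with nvmet_auth_set_key, restrict the host with bdev_nvme_set_options, attach with the matching --dhchap-key/--dhchap-ctrlr-key pair, confirm the controller exists, and detach. A minimal standalone sketch of that positive path, assuming SPDK's scripts/rpc.py CLI and DHHC-1 keys already registered with the keyring (the registration step is not shown in this excerpt; the test itself drives the same RPCs through its rpc_cmd helper):

  rpc=scripts/rpc.py   # assumed path to SPDK's JSON-RPC client
  # Allow exactly one digest/dhgroup combination for this iteration.
  $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  # Attach with a host key and a controller (bidirectional) key.
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # The attach only succeeds if DH-HMAC-CHAP completed; verify, then tear down
  # for the next combination.
  [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  $rpc bdev_nvme_detach_controller nvme0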
00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.778 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.039 request: 00:29:15.039 { 00:29:15.039 "name": "nvme0", 00:29:15.039 "trtype": "tcp", 00:29:15.039 "traddr": "10.0.0.1", 00:29:15.039 "adrfam": "ipv4", 00:29:15.039 "trsvcid": "4420", 00:29:15.039 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:15.039 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:15.039 "prchk_reftag": false, 00:29:15.039 "prchk_guard": false, 00:29:15.039 "hdgst": false, 00:29:15.039 "ddgst": false, 00:29:15.039 "dhchap_key": "key2", 00:29:15.039 "allow_unrecognized_csi": false, 00:29:15.039 "method": "bdev_nvme_attach_controller", 00:29:15.039 "req_id": 1 00:29:15.039 } 00:29:15.039 Got JSON-RPC error response 00:29:15.039 response: 00:29:15.039 { 00:29:15.039 "code": -5, 00:29:15.039 "message": "Input/output error" 00:29:15.039 } 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
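The two requests above are expected to fail: once the target demands authentication, an attach with no DH-HMAC-CHAP key, or with a key that does not match the one the target was just configured with, is rejected, and the failed handshake surfaces as JSON-RPC code -5 ("Input/output error"). The NOT helper from autotest_common.sh inverts the exit status of rpc_cmd, so the test step passes only when the attach fails. A rough equivalent in plain bash, reusing the rpc variable from the sketch above:

  attach_args=(-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0)
  # `!` plays the role of the suite's NOT wrapper: succeed only on failure.
  ! $rpc bdev_nvme_attach_controller "${attach_args[@]}"                    # no key
  ! $rpc bdev_nvme_attach_controller "${attach_args[@]}" --dhchap-key key2  # wrong key
  # Both attempts must leave no controller behind.
  [[ $($rpc bdev_nvme_get_controllers | jq length) == 0 ]]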
00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.039 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.039 request: 00:29:15.039 { 00:29:15.039 "name": "nvme0", 00:29:15.039 "trtype": "tcp", 00:29:15.039 "traddr": "10.0.0.1", 00:29:15.039 "adrfam": "ipv4", 00:29:15.039 "trsvcid": "4420", 00:29:15.039 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:29:15.039 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:29:15.039 "prchk_reftag": false, 00:29:15.039 "prchk_guard": false, 00:29:15.039 "hdgst": false, 00:29:15.040 "ddgst": false, 00:29:15.040 "dhchap_key": "key1", 00:29:15.040 "dhchap_ctrlr_key": "ckey2", 00:29:15.040 "allow_unrecognized_csi": false, 00:29:15.040 "method": "bdev_nvme_attach_controller", 00:29:15.040 "req_id": 1 00:29:15.040 } 00:29:15.040 Got JSON-RPC error response 00:29:15.040 response: 00:29:15.040 { 00:29:15.040 "code": -5, 00:29:15.040 "message": "Input/output 
error" 00:29:15.040 } 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.040 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.301 nvme0n1 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.301 request: 00:29:15.301 { 00:29:15.301 "name": "nvme0", 00:29:15.301 "dhchap_key": "key1", 00:29:15.301 "dhchap_ctrlr_key": "ckey2", 00:29:15.301 "method": "bdev_nvme_set_keys", 00:29:15.301 "req_id": 1 00:29:15.301 } 00:29:15.301 Got JSON-RPC error response 00:29:15.301 response: 00:29:15.301 { 00:29:15.301 "code": -13, 00:29:15.301 "message": "Permission denied" 00:29:15.301 } 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:15.301 13:33:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:16.687 13:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:16.687 13:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:16.687 13:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.687 13:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:16.687 13:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.687 13:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:29:16.687 13:33:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWE2ZDEwNmJhNTUxM2IyYTk0YTg4MzFlMGZlMzBkZjA0OTNhNjU4YjI4ODFhYzQ1fy4wfA==: 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: ]] 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YjM2MjdjZTEyMzNlNWRhNzhlNmIwZmJjY2U5YjAyZjc1OTgxMTZmYmI2ZGVlOWE5GOU62A==: 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.631 13:33:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.631 nvme0n1 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmMyOTU5M2E5NTQwMjFjN2U2NmNhYjgyNWFlNzAxMWTyT85O: 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: ]] 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjY0NDQ3M2UzY2NkNmU4MjcyMmQwMDc2MjMyNGIwYmO/1pUt: 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.631 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.894 request: 00:29:17.894 { 00:29:17.894 "name": "nvme0", 00:29:17.894 "dhchap_key": "key2", 00:29:17.894 "dhchap_ctrlr_key": "ckey1", 00:29:17.894 "method": "bdev_nvme_set_keys", 00:29:17.894 "req_id": 1 00:29:17.894 } 00:29:17.894 Got JSON-RPC error response 00:29:17.894 response: 00:29:17.894 { 00:29:17.894 "code": -13, 00:29:17.894 "message": "Permission denied" 00:29:17.894 } 00:29:17.894 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:17.894 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:29:17.894 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:17.894 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:17.894 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:17.894 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:17.894 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:17.894 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.894 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.894 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.894 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:29:17.894 13:33:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:18.837 13:33:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:18.837 rmmod nvme_tcp 00:29:18.837 rmmod nvme_fabrics 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1096629 ']' 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1096629 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1096629 ']' 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1096629 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:18.837 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1096629 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1096629' 00:29:19.098 killing process with pid 1096629 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1096629 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1096629 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:29:19.098 13:33:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:21.645 13:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:21.645 13:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:21.645 13:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:21.645 13:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:21.645 13:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:21.645 13:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:29:21.645 13:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:21.645 13:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:21.645 13:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:21.645 13:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:21.645 13:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:29:21.645 13:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:29:21.645 13:33:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:24.949 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:24.949 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:24.949 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:24.949 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:24.949 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:24.949 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:24.949 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:24.949 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:24.949 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:29:24.949 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:29:24.949 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:29:24.949 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:29:24.949 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:29:24.949 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:29:24.949 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:29:24.949 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:29:24.949 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:29:25.210 13:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.yn0 /tmp/spdk.key-null.BIh /tmp/spdk.key-sha256.frT /tmp/spdk.key-sha384.Bdv /tmp/spdk.key-sha512.BKw /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:25.210 13:33:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:28.514 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:28.514 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:28.514 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:29:28.514 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:28.514 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:28.514 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:28.514 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:28.514 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:28.514 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:28.514 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:28.514 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:28.514 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:28.514 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:28.514 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:28.514 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:28.514 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:28.514 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:28.775 00:29:28.775 real 1m3.988s 00:29:28.775 user 0m56.650s 00:29:28.775 sys 0m17.026s 00:29:28.775 13:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.775 13:33:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.775 ************************************ 00:29:28.775 END TEST nvmf_auth_host 00:29:28.775 ************************************ 00:29:28.775 13:33:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:29:28.775 13:33:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:28.776 13:33:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:28.776 13:33:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.776 13:33:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.776 ************************************ 00:29:28.776 START TEST nvmf_digest 00:29:28.776 ************************************ 00:29:28.776 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:28.776 * Looking for test storage... 
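The rekey flow the auth test traced above reduces to three RPCs. A minimal sketch, assuming a repo-root working directory and the controller name (nvme0) and key names (key1/key2, ckey1/ckey2) this run loaded earlier:

  # Rotating to a key pair the target also holds succeeds:
  scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # A mismatched pair is refused by the target; the RPC fails with
  # JSON-RPC error -13 "Permission denied", which the test asserts via NOT:
  scripts/rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
  # The failed reauthentication costs the connection, and with
  # --ctrlr-loss-timeout-sec 1 the host drops the controller; the test
  # polls until the controller list drains:
  scripts/rpc.py bdev_nvme_get_controllers | jq length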
00:29:28.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:28.776 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:28.776 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:29:28.776 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:29.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.037 --rc genhtml_branch_coverage=1 00:29:29.037 --rc genhtml_function_coverage=1 00:29:29.037 --rc genhtml_legend=1 00:29:29.037 --rc geninfo_all_blocks=1 00:29:29.037 --rc geninfo_unexecuted_blocks=1 00:29:29.037 00:29:29.037 ' 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:29.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.037 --rc genhtml_branch_coverage=1 00:29:29.037 --rc genhtml_function_coverage=1 00:29:29.037 --rc genhtml_legend=1 00:29:29.037 --rc geninfo_all_blocks=1 00:29:29.037 --rc geninfo_unexecuted_blocks=1 00:29:29.037 00:29:29.037 ' 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:29.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.037 --rc genhtml_branch_coverage=1 00:29:29.037 --rc genhtml_function_coverage=1 00:29:29.037 --rc genhtml_legend=1 00:29:29.037 --rc geninfo_all_blocks=1 00:29:29.037 --rc geninfo_unexecuted_blocks=1 00:29:29.037 00:29:29.037 ' 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:29.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.037 --rc genhtml_branch_coverage=1 00:29:29.037 --rc genhtml_function_coverage=1 00:29:29.037 --rc genhtml_legend=1 00:29:29.037 --rc geninfo_all_blocks=1 00:29:29.037 --rc geninfo_unexecuted_blocks=1 00:29:29.037 00:29:29.037 ' 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.037 
13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:29.037 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:29.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:29.038 13:33:51 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:29:29.038 13:33:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:37.195 
13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:37.195 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:37.195 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:37.195 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:37.196 Found net devices under 0000:31:00.0: cvl_0_0 
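The device scan above resolves each supported PCI function to its kernel net device purely through sysfs; the same lookup by hand, using one of the ice ports found in this run:

  pci=0000:31:00.0
  ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0 (renamed ice port)
  # the [[ up == up ]] check in the trace then keeps only interfaces
  # reporting up (presumably their operstate) before adding them to net_devs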
00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:37.196 Found net devices under 0000:31:00.1: cvl_0_1 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:37.196 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:37.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:37.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:29:37.456 00:29:37.456 --- 10.0.0.2 ping statistics --- 00:29:37.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.456 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:37.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:37.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:29:37.456 00:29:37.456 --- 10.0.0.1 ping statistics --- 00:29:37.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:37.456 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:37.456 ************************************ 00:29:37.456 START TEST nvmf_digest_clean 00:29:37.456 ************************************ 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1115506 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1115506 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1115506 ']' 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:37.456 13:33:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:37.456 [2024-12-05 13:33:59.945036] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:29:37.456 [2024-12-05 13:33:59.945101] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.717 [2024-12-05 13:34:00.036900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.717 [2024-12-05 13:34:00.080853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.717 [2024-12-05 13:34:00.080897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.717 [2024-12-05 13:34:00.080905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.717 [2024-12-05 13:34:00.080912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.717 [2024-12-05 13:34:00.080918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
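The nvmf_tcp_init sequence recorded above gives the target its own network namespace so one machine can play both initiator and target over real NIC ports; condensed to its effective commands:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # initiator -> target, verified above at 0.640 ms

nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk, which is why its listener on 10.0.0.2:4420 sits behind the moved port.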
00:29:37.717 [2024-12-05 13:34:00.081512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.286 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:38.286 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:38.286 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:38.286 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:38.286 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:38.286 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.286 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:38.286 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:38.286 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:38.286 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.286 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:38.548 null0 00:29:38.548 [2024-12-05 13:34:00.862977] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.548 [2024-12-05 13:34:00.887196] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1115583 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1115583 /var/tmp/bperf.sock 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1115583 ']' 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:38.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.548 13:34:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:38.548 [2024-12-05 13:34:00.945265] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:29:38.548 [2024-12-05 13:34:00.945325] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1115583 ] 00:29:38.548 [2024-12-05 13:34:01.049310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.548 [2024-12-05 13:34:01.085428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.489 13:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:39.489 13:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:39.489 13:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:39.489 13:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:39.489 13:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:39.489 13:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:39.489 13:34:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:40.059 nvme0n1 00:29:40.059 13:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:40.060 13:34:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:40.060 Running I/O for 2 seconds... 
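Each run_bperf pass above drives bdevperf entirely over its RPC socket rather than a config file. A minimal sketch of this first pass (4 KiB random reads, queue depth 128), assuming a repo-root working directory:

  # Start bdevperf idle on core 1, listening on a private RPC socket:
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # Attach the target with TCP data digest (--ddgst) so every IO carries
  # a crc32c to compute and verify:
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Kick off the timed workload against the resulting nvme0n1 bdev:
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests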
00:29:41.942 19825.00 IOPS, 77.44 MiB/s [2024-12-05T12:34:04.510Z] 19741.50 IOPS, 77.12 MiB/s 00:29:41.942 Latency(us) 00:29:41.942 [2024-12-05T12:34:04.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:41.942 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:41.942 nvme0n1 : 2.01 19760.50 77.19 0.00 0.00 6469.03 3153.92 22173.01 00:29:41.942 [2024-12-05T12:34:04.510Z] =================================================================================================================== 00:29:41.942 [2024-12-05T12:34:04.510Z] Total : 19760.50 77.19 0.00 0.00 6469.03 3153.92 22173.01 00:29:41.942 { 00:29:41.942 "results": [ 00:29:41.942 { 00:29:41.942 "job": "nvme0n1", 00:29:41.942 "core_mask": "0x2", 00:29:41.942 "workload": "randread", 00:29:41.942 "status": "finished", 00:29:41.942 "queue_depth": 128, 00:29:41.942 "io_size": 4096, 00:29:41.942 "runtime": 2.005921, 00:29:41.942 "iops": 19760.49904258443, 00:29:41.942 "mibps": 77.18944938509543, 00:29:41.942 "io_failed": 0, 00:29:41.942 "io_timeout": 0, 00:29:41.942 "avg_latency_us": 6469.025491363506, 00:29:41.942 "min_latency_us": 3153.92, 00:29:41.942 "max_latency_us": 22173.013333333332 00:29:41.942 } 00:29:41.942 ], 00:29:41.942 "core_count": 1 00:29:41.942 } 00:29:41.942 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:41.942 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:41.942 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:41.942 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:41.942 | select(.opcode=="crc32c") 00:29:41.942 | "\(.module_name) \(.executed)"' 00:29:41.942 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:42.203 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:42.203 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:42.203 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:42.203 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:42.203 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1115583 00:29:42.203 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1115583 ']' 00:29:42.203 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1115583 00:29:42.203 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:42.203 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.203 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1115583 00:29:42.203 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:42.203 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:29:42.203 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1115583' 00:29:42.203 killing process with pid 1115583 00:29:42.203 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1115583 00:29:42.203 Received shutdown signal, test time was about 2.000000 seconds 00:29:42.203 00:29:42.203 Latency(us) 00:29:42.203 [2024-12-05T12:34:04.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.203 [2024-12-05T12:34:04.771Z] =================================================================================================================== 00:29:42.203 [2024-12-05T12:34:04.771Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:42.203 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1115583 00:29:42.465 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:42.465 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:42.465 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:42.465 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:42.465 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:42.465 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:42.465 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:42.465 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1116383 00:29:42.465 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1116383 /var/tmp/bperf.sock 00:29:42.465 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1116383 ']' 00:29:42.465 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:42.465 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:42.465 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.465 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:42.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:42.465 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.465 13:34:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:42.465 [2024-12-05 13:34:04.874123] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:29:42.465 [2024-12-05 13:34:04.874183] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116383 ] 00:29:42.465 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:42.465 Zero copy mechanism will not be used. 00:29:42.465 [2024-12-05 13:34:04.963899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.465 [2024-12-05 13:34:04.993510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.408 13:34:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.408 13:34:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:43.408 13:34:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:43.408 13:34:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:43.408 13:34:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:43.408 13:34:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:43.408 13:34:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:43.668 nvme0n1 00:29:43.668 13:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:43.668 13:34:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:43.929 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:43.929 Zero copy mechanism will not be used. 00:29:43.929 Running I/O for 2 seconds... 
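The xtrace above repeats the same client-side pattern for every data point: start bdevperf suspended (--wait-for-rpc), finish its framework init over the RPC socket, attach an NVMe-oF controller with data digest enabled, then drive the run via perform_tests. Condensed into stand-alone shell; paths, socket, NQN and the 131072/16 randread parameters are taken verbatim from the log, while SPDK_ROOT and the sleep are only stand-ins for the harness's helpers:

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_ROOT/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
    sleep 1    # the harness polls the socket (waitforlisten); a plain sleep stands in here
    "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
    "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

The -z flag keeps bdevperf alive between RPC-driven test invocations, which is why the same process can be reused for the whole two-second run before being torn down by killprocess.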
00:29:45.812 3052.00 IOPS, 381.50 MiB/s [2024-12-05T12:34:08.381Z] 3192.00 IOPS, 399.00 MiB/s 00:29:45.813 Latency(us) 00:29:45.813 [2024-12-05T12:34:08.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.813 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:45.813 nvme0n1 : 2.00 3193.70 399.21 0.00 0.00 5006.95 604.16 11523.41 00:29:45.813 [2024-12-05T12:34:08.381Z] =================================================================================================================== 00:29:45.813 [2024-12-05T12:34:08.381Z] Total : 3193.70 399.21 0.00 0.00 5006.95 604.16 11523.41 00:29:45.813 { 00:29:45.813 "results": [ 00:29:45.813 { 00:29:45.813 "job": "nvme0n1", 00:29:45.813 "core_mask": "0x2", 00:29:45.813 "workload": "randread", 00:29:45.813 "status": "finished", 00:29:45.813 "queue_depth": 16, 00:29:45.813 "io_size": 131072, 00:29:45.813 "runtime": 2.003946, 00:29:45.813 "iops": 3193.6988322040615, 00:29:45.813 "mibps": 399.2123540255077, 00:29:45.813 "io_failed": 0, 00:29:45.813 "io_timeout": 0, 00:29:45.813 "avg_latency_us": 5006.954133333334, 00:29:45.813 "min_latency_us": 604.16, 00:29:45.813 "max_latency_us": 11523.413333333334 00:29:45.813 } 00:29:45.813 ], 00:29:45.813 "core_count": 1 00:29:45.813 } 00:29:45.813 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:45.813 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:45.813 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:45.813 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:45.813 | select(.opcode=="crc32c") 00:29:45.813 | "\(.module_name) \(.executed)"' 00:29:45.813 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:46.073 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:46.073 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:46.073 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:46.073 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:46.073 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1116383 00:29:46.073 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1116383 ']' 00:29:46.073 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1116383 00:29:46.073 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:46.073 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:46.073 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1116383 00:29:46.073 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:46.073 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = 
sudo ']' 00:29:46.073 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1116383' 00:29:46.073 killing process with pid 1116383 00:29:46.073 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1116383 00:29:46.073 Received shutdown signal, test time was about 2.000000 seconds 00:29:46.073 00:29:46.073 Latency(us) 00:29:46.073 [2024-12-05T12:34:08.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:46.073 [2024-12-05T12:34:08.641Z] =================================================================================================================== 00:29:46.073 [2024-12-05T12:34:08.641Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:46.073 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1116383 00:29:46.334 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:46.334 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:46.334 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:46.334 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:46.334 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:46.334 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:46.334 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:46.334 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1117204 00:29:46.334 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1117204 /var/tmp/bperf.sock 00:29:46.334 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1117204 ']' 00:29:46.334 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:46.334 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:46.334 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:46.334 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:46.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:46.334 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:46.334 13:34:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:46.334 [2024-12-05 13:34:08.710217] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:29:46.334 [2024-12-05 13:34:08.710274] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117204 ] 00:29:46.335 [2024-12-05 13:34:08.799571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.335 [2024-12-05 13:34:08.828444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.276 13:34:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.276 13:34:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:47.276 13:34:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:47.276 13:34:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:47.276 13:34:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:47.276 13:34:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:47.276 13:34:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:47.537 nvme0n1 00:29:47.537 13:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:47.537 13:34:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:47.798 Running I/O for 2 seconds... 
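After each run, digest.sh checks that crc32c digests were actually computed, and by the expected module. The jq filter shown twice above reduces accel_get_stats output to a "module executed-count" pair; a condensed reconstruction of that check (socket path, filter and variable names are verbatim from the xtrace):

    read -r acc_module acc_executed < <(
        "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
            jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    exp_module=software    # scan_dsa=false in all four runs, so the software module is expected
    (( acc_executed > 0 )) && [[ $acc_module == "$exp_module" ]] &&
        echo "crc32c handled by $acc_module ($acc_executed operations)"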
00:29:49.684 21658.00 IOPS, 84.60 MiB/s [2024-12-05T12:34:12.252Z] 21756.00 IOPS, 84.98 MiB/s 00:29:49.684 Latency(us) 00:29:49.684 [2024-12-05T12:34:12.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.684 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:49.684 nvme0n1 : 2.00 21768.67 85.03 0.00 0.00 5872.58 2266.45 11304.96 00:29:49.684 [2024-12-05T12:34:12.252Z] =================================================================================================================== 00:29:49.684 [2024-12-05T12:34:12.252Z] Total : 21768.67 85.03 0.00 0.00 5872.58 2266.45 11304.96 00:29:49.684 { 00:29:49.684 "results": [ 00:29:49.684 { 00:29:49.684 "job": "nvme0n1", 00:29:49.684 "core_mask": "0x2", 00:29:49.684 "workload": "randwrite", 00:29:49.684 "status": "finished", 00:29:49.684 "queue_depth": 128, 00:29:49.684 "io_size": 4096, 00:29:49.684 "runtime": 2.004716, 00:29:49.684 "iops": 21768.669477372358, 00:29:49.684 "mibps": 85.03386514598577, 00:29:49.684 "io_failed": 0, 00:29:49.684 "io_timeout": 0, 00:29:49.684 "avg_latency_us": 5872.584153987168, 00:29:49.684 "min_latency_us": 2266.4533333333334, 00:29:49.684 "max_latency_us": 11304.96 00:29:49.684 } 00:29:49.684 ], 00:29:49.684 "core_count": 1 00:29:49.684 } 00:29:49.684 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:49.684 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:49.684 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:49.684 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:49.684 | select(.opcode=="crc32c") 00:29:49.684 | "\(.module_name) \(.executed)"' 00:29:49.684 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:49.945 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:49.945 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:49.945 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:49.945 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:49.945 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1117204 00:29:49.945 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1117204 ']' 00:29:49.945 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1117204 00:29:49.945 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:49.945 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:49.945 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1117204 00:29:49.945 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:49.945 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:29:49.945 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1117204' 00:29:49.945 killing process with pid 1117204 00:29:49.945 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1117204 00:29:49.945 Received shutdown signal, test time was about 2.000000 seconds 00:29:49.945 00:29:49.945 Latency(us) 00:29:49.945 [2024-12-05T12:34:12.513Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:49.945 [2024-12-05T12:34:12.513Z] =================================================================================================================== 00:29:49.945 [2024-12-05T12:34:12.513Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:49.945 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1117204 00:29:50.205 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:50.205 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:50.206 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:50.206 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:50.206 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:50.206 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:50.206 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:50.206 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1117955 00:29:50.206 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1117955 /var/tmp/bperf.sock 00:29:50.206 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1117955 ']' 00:29:50.206 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:50.206 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:50.206 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:50.206 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:50.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:50.206 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:50.206 13:34:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:50.206 [2024-12-05 13:34:12.592486] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:29:50.206 [2024-12-05 13:34:12.592544] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117955 ] 00:29:50.206 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:50.206 Zero copy mechanism will not be used. 00:29:50.206 [2024-12-05 13:34:12.681971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.206 [2024-12-05 13:34:12.711492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.146 13:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:51.146 13:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:51.146 13:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:51.146 13:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:51.146 13:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:51.146 13:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:51.147 13:34:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:51.718 nvme0n1 00:29:51.718 13:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:51.718 13:34:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:51.718 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:51.718 Zero copy mechanism will not be used. 00:29:51.718 Running I/O for 2 seconds... 
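The JSON blocks printed after each run are internally consistent: "mibps" is just iops times io_size scaled to MiB. A quick jq sanity check, where bperf_results.json is a hypothetical capture of one of the JSON blocks above:

    jq '.results[] | .iops * .io_size / (1024 * 1024)' bperf_results.json
    # e.g. the randwrite 4096/128 block: 21768.669477 * 4096 / 1048576 gives 85.0339, matching "mibps"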
00:29:53.601 4463.00 IOPS, 557.88 MiB/s [2024-12-05T12:34:16.169Z] 4383.50 IOPS, 547.94 MiB/s 00:29:53.601 Latency(us) 00:29:53.601 [2024-12-05T12:34:16.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.601 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:53.601 nvme0n1 : 2.01 4381.10 547.64 0.00 0.00 3646.56 1699.84 8683.52 00:29:53.601 [2024-12-05T12:34:16.169Z] =================================================================================================================== 00:29:53.601 [2024-12-05T12:34:16.169Z] Total : 4381.10 547.64 0.00 0.00 3646.56 1699.84 8683.52 00:29:53.601 { 00:29:53.601 "results": [ 00:29:53.601 { 00:29:53.601 "job": "nvme0n1", 00:29:53.601 "core_mask": "0x2", 00:29:53.601 "workload": "randwrite", 00:29:53.601 "status": "finished", 00:29:53.601 "queue_depth": 16, 00:29:53.601 "io_size": 131072, 00:29:53.601 "runtime": 2.005431, 00:29:53.601 "iops": 4381.1031144925955, 00:29:53.601 "mibps": 547.6378893115744, 00:29:53.601 "io_failed": 0, 00:29:53.601 "io_timeout": 0, 00:29:53.601 "avg_latency_us": 3646.5627558995375, 00:29:53.601 "min_latency_us": 1699.84, 00:29:53.601 "max_latency_us": 8683.52 00:29:53.601 } 00:29:53.601 ], 00:29:53.601 "core_count": 1 00:29:53.601 } 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:53.863 | select(.opcode=="crc32c") 00:29:53.863 | "\(.module_name) \(.executed)"' 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1117955 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1117955 ']' 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1117955 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1117955 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo 
']' 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1117955' 00:29:53.863 killing process with pid 1117955 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1117955 00:29:53.863 Received shutdown signal, test time was about 2.000000 seconds 00:29:53.863 00:29:53.863 Latency(us) 00:29:53.863 [2024-12-05T12:34:16.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.863 [2024-12-05T12:34:16.431Z] =================================================================================================================== 00:29:53.863 [2024-12-05T12:34:16.431Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:53.863 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1117955 00:29:54.124 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1115506 00:29:54.124 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1115506 ']' 00:29:54.124 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1115506 00:29:54.124 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:54.124 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:54.124 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1115506 00:29:54.124 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:54.124 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:54.124 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1115506' 00:29:54.124 killing process with pid 1115506 00:29:54.124 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1115506 00:29:54.124 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1115506 00:29:54.385 00:29:54.385 real 0m16.831s 00:29:54.385 user 0m33.462s 00:29:54.385 sys 0m3.433s 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:54.385 ************************************ 00:29:54.385 END TEST nvmf_digest_clean 00:29:54.385 ************************************ 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:54.385 ************************************ 00:29:54.385 START TEST nvmf_digest_error 00:29:54.385 ************************************ 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 
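killprocess appears throughout the teardown above (pids 1115583, 1116383, 1117204, 1117955, and finally the nvmf target 1115506). Its logic can be reconstructed from the xtrace alone; the branch taken when the pid belongs to a sudo wrapper is never exercised in this log, so that part is an assumption:

    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                            # autotest_common.sh@954
        kill -0 "$pid" || return 1                           # @958: pid still alive?
        if [ "$(uname)" = Linux ]; then                      # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: e.g. reactor_1
            # @964: the trace only ever sees reactor_*; how a sudo wrapper is
            # handled is not visible here, so treat it as unsupported (assumption)
            [ "$process_name" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"                 # @972
        kill "$pid"                                          # @973
        wait "$pid"                                          # @978: reap; bperf is a child of the shell
    }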
00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1118669 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1118669 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1118669 ']' 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.385 13:34:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:54.385 [2024-12-05 13:34:16.855652] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:29:54.386 [2024-12-05 13:34:16.855710] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.386 [2024-12-05 13:34:16.946242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.694 [2024-12-05 13:34:16.986614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.694 [2024-12-05 13:34:16.986652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.694 [2024-12-05 13:34:16.986660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.694 [2024-12-05 13:34:16.986667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.694 [2024-12-05 13:34:16.986673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
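The error-path test needs the target paused before subsystem init, because the crc32c opcode has to be rerouted before any I/O path is wired up; that is what --wait-for-rpc buys. The trace notices above also spell out how to snapshot the 0xFFFF tracepoint mask. A minimal stand-alone equivalent, with the namespace name, flags and spdk_trace invocation taken from the log and the spdk_trace binary path assumed to be the standard build location:

    ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    "$SPDK_ROOT/build/bin/spdk_trace" -s nvmf -i 0   # capture the snapshot the notice refers to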
00:29:54.694 [2024-12-05 13:34:16.987284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:55.278 [2024-12-05 13:34:17.693426] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:55.278 null0 00:29:55.278 [2024-12-05 13:34:17.776803] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.278 [2024-12-05 13:34:17.801028] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1119017 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1119017 /var/tmp/bperf.sock 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1119017 ']' 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
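The target-side RPCs behind the notices above (crc32c assigned to the error module, the null0 bdev, the TCP transport and the 10.0.0.2:4420 listener) are not echoed by this part of the harness. A plausible reconstruction; only the names and addresses come from the log, while the null bdev size/block size and the subsystem flags are assumptions:

    tgt_rpc() { "$SPDK_ROOT/scripts/rpc.py" "$@"; }      # default socket /var/tmp/spdk.sock
    tgt_rpc accel_assign_opc -o crc32c -m error          # "Operation crc32c will be assigned to module error"
    tgt_rpc framework_start_init
    tgt_rpc bdev_null_create null0 100 4096              # "null0" is from the log; 100 MiB / 4 KiB assumed
    tgt_rpc nvmf_create_transport -t tcp                 # "*** TCP Transport Init ***"
    tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420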
00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:55.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:55.278 13:34:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:55.538 [2024-12-05 13:34:17.858933] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:29:55.538 [2024-12-05 13:34:17.858983] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119017 ] 00:29:55.538 [2024-12-05 13:34:17.946109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.538 [2024-12-05 13:34:17.976040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.108 13:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:56.108 13:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:56.108 13:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:56.108 13:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:56.368 13:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:56.368 13:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.368 13:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:56.368 13:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.368 13:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:56.368 13:34:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:56.628 nvme0n1 00:29:56.887 13:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:56.887 13:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.888 13:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
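With the plumbing in place, the digest-error sequence condenses to: keep retrying forever on the host (--bdev-retry-count -1), attach with data digest on (--ddgst), then have the target's accel error module corrupt the next 256 crc32c results. Each corrupted digest surfaces on the host as one of the "data digest error on tqpair" / COMMAND TRANSIENT TRANSPORT ERROR (00/22) pairs that fill the rest of this section. A condensed sketch, where tgt_rpc and bperf_rpc stand in for the harness's rpc_cmd and bperf_rpc helpers and every RPC name and argument is verbatim from the trace:

    tgt_rpc()   { "$SPDK_ROOT/scripts/rpc.py" "$@"; }                     # target's default socket
    bperf_rpc() { "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    tgt_rpc accel_error_inject_error -o crc32c -t disable               # start from a clean slate
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # --ddgst enables data digests
    tgt_rpc accel_error_inject_error -o crc32c -t corrupt -i 256        # corrupt the next 256 crc32c ops
    "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests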
00:29:56.888 13:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.888 13:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:56.888 13:34:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:56.888 Running I/O for 2 seconds... 00:29:56.888 [2024-12-05 13:34:19.311641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:56.888 [2024-12-05 13:34:19.311673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.888 [2024-12-05 13:34:19.311682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.888 [2024-12-05 13:34:19.323099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:56.888 [2024-12-05 13:34:19.323118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.888 [2024-12-05 13:34:19.323126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.888 [2024-12-05 13:34:19.333961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:56.888 [2024-12-05 13:34:19.333981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.888 [2024-12-05 13:34:19.333987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.888 [2024-12-05 13:34:19.347390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:56.888 [2024-12-05 13:34:19.347409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.888 [2024-12-05 13:34:19.347416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.888 [2024-12-05 13:34:19.361047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:56.888 [2024-12-05 13:34:19.361066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.888 [2024-12-05 13:34:19.361073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.888 [2024-12-05 13:34:19.374005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:56.888 [2024-12-05 13:34:19.374024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.888 [2024-12-05 13:34:19.374031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.888 [2024-12-05 13:34:19.386645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:56.888 [2024-12-05 13:34:19.386664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.888 [2024-12-05 13:34:19.386675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.888 [2024-12-05 13:34:19.397926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:56.888 [2024-12-05 13:34:19.397943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.888 [2024-12-05 13:34:19.397949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.888 [2024-12-05 13:34:19.410747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:56.888 [2024-12-05 13:34:19.410765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.888 [2024-12-05 13:34:19.410772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.888 [2024-12-05 13:34:19.423853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:56.888 [2024-12-05 13:34:19.423874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.888 [2024-12-05 13:34:19.423881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.888 [2024-12-05 13:34:19.436744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:56.888 [2024-12-05 13:34:19.436762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.888 [2024-12-05 13:34:19.436768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:56.888 [2024-12-05 13:34:19.446897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:56.888 [2024-12-05 13:34:19.446915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.888 [2024-12-05 13:34:19.446921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.460153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.460171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.460178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.474046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.474064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:23064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.474071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.485412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.485430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.485436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.497594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.497617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.497625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.511118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.511136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.511142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.523274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.523292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.523298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.535296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.535313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.535320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.546857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.546877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.546884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.560205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.560223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.560229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.573523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.573540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.573547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.586234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.586252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.586258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.597234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.597252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.597259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.610227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.610244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.610250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.623961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.623978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.623985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.636749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.636766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:57.148 [2024-12-05 13:34:19.636773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.647599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.647616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:22179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.647622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.660306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.660324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.660330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.673270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.673288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.673294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.685812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.685829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.685836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.698632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.698650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.698656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.148 [2024-12-05 13:34:19.713081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.148 [2024-12-05 13:34:19.713099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.148 [2024-12-05 13:34:19.713109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.722797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.722815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24488 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.722821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.736253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.736270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.736276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.749830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.749847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.749853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.762258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.762276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.762282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.775072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.775089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.775095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.787213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.787230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.787237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.800397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.800414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:7671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.800421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.812112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.812129] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:69 nsid:1 lba:16209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.812135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.823887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.823907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.823914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.837124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.837142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.837148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.850006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.850023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.850030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.863044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.863061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.863068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.874872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.874888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.874895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.885714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.885731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.885738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.898859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.898881] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.898888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.912757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.912774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.912780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.924914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.924931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.924937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.938202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.938218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.938225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.948643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.948661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.948667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.408 [2024-12-05 13:34:19.961626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.408 [2024-12-05 13:34:19.961643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.408 [2024-12-05 13:34:19.961650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.670 [2024-12-05 13:34:19.974949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.670 [2024-12-05 13:34:19.974967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.670 [2024-12-05 13:34:19.974974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.670 [2024-12-05 13:34:19.986004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x5ece80) 00:29:57.670 [2024-12-05 13:34:19.986021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.670 [2024-12-05 13:34:19.986028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.670 [2024-12-05 13:34:19.999497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.670 [2024-12-05 13:34:19.999515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.670 [2024-12-05 13:34:19.999521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.670 [2024-12-05 13:34:20.013017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.670 [2024-12-05 13:34:20.013036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.670 [2024-12-05 13:34:20.013043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.670 [2024-12-05 13:34:20.024185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.670 [2024-12-05 13:34:20.024202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.670 [2024-12-05 13:34:20.024209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.670 [2024-12-05 13:34:20.038524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.670 [2024-12-05 13:34:20.038542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.670 [2024-12-05 13:34:20.038552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.670 [2024-12-05 13:34:20.049627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.670 [2024-12-05 13:34:20.049645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.670 [2024-12-05 13:34:20.049652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.670 [2024-12-05 13:34:20.063404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.670 [2024-12-05 13:34:20.063422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:10515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.670 [2024-12-05 13:34:20.063428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.670 [2024-12-05 13:34:20.076542] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.670 [2024-12-05 13:34:20.076559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.670 [2024-12-05 13:34:20.076565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.670 [2024-12-05 13:34:20.088337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.670 [2024-12-05 13:34:20.088355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.670 [2024-12-05 13:34:20.088361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.670 [2024-12-05 13:34:20.100882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.670 [2024-12-05 13:34:20.100899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.670 [2024-12-05 13:34:20.100906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.670 [2024-12-05 13:34:20.112850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.670 [2024-12-05 13:34:20.112870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:1958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.670 [2024-12-05 13:34:20.112876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.670 [2024-12-05 13:34:20.124954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.671 [2024-12-05 13:34:20.124971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.671 [2024-12-05 13:34:20.124978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.671 [2024-12-05 13:34:20.137941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.671 [2024-12-05 13:34:20.137959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.671 [2024-12-05 13:34:20.137965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.671 [2024-12-05 13:34:20.150497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.671 [2024-12-05 13:34:20.150515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.671 [2024-12-05 13:34:20.150522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:57.671 [2024-12-05 13:34:20.163597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.671 [2024-12-05 13:34:20.163615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.671 [2024-12-05 13:34:20.163622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.671 [2024-12-05 13:34:20.175686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.671 [2024-12-05 13:34:20.175703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.671 [2024-12-05 13:34:20.175709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.671 [2024-12-05 13:34:20.188935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.671 [2024-12-05 13:34:20.188952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.671 [2024-12-05 13:34:20.188958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.671 [2024-12-05 13:34:20.200850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.671 [2024-12-05 13:34:20.200871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.671 [2024-12-05 13:34:20.200878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.671 [2024-12-05 13:34:20.212284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.671 [2024-12-05 13:34:20.212301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.671 [2024-12-05 13:34:20.212308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.671 [2024-12-05 13:34:20.224652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.671 [2024-12-05 13:34:20.224670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.671 [2024-12-05 13:34:20.224676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.932 [2024-12-05 13:34:20.237868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.932 [2024-12-05 13:34:20.237886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.932 [2024-12-05 13:34:20.237892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.932 [2024-12-05 13:34:20.250593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.932 [2024-12-05 13:34:20.250611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.932 [2024-12-05 13:34:20.250621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.932 [2024-12-05 13:34:20.263117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.932 [2024-12-05 13:34:20.263133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.932 [2024-12-05 13:34:20.263140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.932 [2024-12-05 13:34:20.276072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.932 [2024-12-05 13:34:20.276088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.932 [2024-12-05 13:34:20.276095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.932 [2024-12-05 13:34:20.286788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.932 [2024-12-05 13:34:20.286805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.932 [2024-12-05 13:34:20.286812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.932 20305.00 IOPS, 79.32 MiB/s [2024-12-05T12:34:20.500Z] [2024-12-05 13:34:20.300968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.932 [2024-12-05 13:34:20.300985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.932 [2024-12-05 13:34:20.300991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.933 [2024-12-05 13:34:20.312773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.933 [2024-12-05 13:34:20.312791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.933 [2024-12-05 13:34:20.312797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.933 [2024-12-05 13:34:20.326058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.933 [2024-12-05 13:34:20.326076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.933 [2024-12-05 
13:34:20.326082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.933 [2024-12-05 13:34:20.337836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.933 [2024-12-05 13:34:20.337854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.933 [2024-12-05 13:34:20.337860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.933 [2024-12-05 13:34:20.349836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.933 [2024-12-05 13:34:20.349853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.933 [2024-12-05 13:34:20.349860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.933 [2024-12-05 13:34:20.363309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.933 [2024-12-05 13:34:20.363330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.933 [2024-12-05 13:34:20.363337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.933 [2024-12-05 13:34:20.376031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.933 [2024-12-05 13:34:20.376049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.933 [2024-12-05 13:34:20.376057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.933 [2024-12-05 13:34:20.388606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.933 [2024-12-05 13:34:20.388623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.933 [2024-12-05 13:34:20.388630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.933 [2024-12-05 13:34:20.400158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.933 [2024-12-05 13:34:20.400175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.933 [2024-12-05 13:34:20.400182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.933 [2024-12-05 13:34:20.412377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.933 [2024-12-05 13:34:20.412396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11737 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.933 [2024-12-05 13:34:20.412402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.933 [2024-12-05 13:34:20.425564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.933 [2024-12-05 13:34:20.425581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.933 [2024-12-05 13:34:20.425588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.933 [2024-12-05 13:34:20.439229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.933 [2024-12-05 13:34:20.439246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.933 [2024-12-05 13:34:20.439253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.933 [2024-12-05 13:34:20.450597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.933 [2024-12-05 13:34:20.450614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.933 [2024-12-05 13:34:20.450621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.933 [2024-12-05 13:34:20.462944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.933 [2024-12-05 13:34:20.462961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.933 [2024-12-05 13:34:20.462967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.933 [2024-12-05 13:34:20.475222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.933 [2024-12-05 13:34:20.475240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.933 [2024-12-05 13:34:20.475246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:57.933 [2024-12-05 13:34:20.488011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:57.933 [2024-12-05 13:34:20.488029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.933 [2024-12-05 13:34:20.488035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.500669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 [2024-12-05 13:34:20.500687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:21 nsid:1 lba:15666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.500694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.512753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 [2024-12-05 13:34:20.512771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.512778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.524178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 [2024-12-05 13:34:20.524195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.524202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.538363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 [2024-12-05 13:34:20.538381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.538388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.550777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 [2024-12-05 13:34:20.550794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.550801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.562456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 [2024-12-05 13:34:20.562473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.562480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.575704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 [2024-12-05 13:34:20.575722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.575732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.589292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 [2024-12-05 13:34:20.589310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.589316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.602618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 [2024-12-05 13:34:20.602636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.602643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.614552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 [2024-12-05 13:34:20.614570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.614577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.627497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 [2024-12-05 13:34:20.627515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.627521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.639257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 [2024-12-05 13:34:20.639275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.639282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.649879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 [2024-12-05 13:34:20.649897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.649904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.664103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 [2024-12-05 13:34:20.664121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.664128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.676372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 
[2024-12-05 13:34:20.676390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.676397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.688361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 [2024-12-05 13:34:20.688382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.688389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.702368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.194 [2024-12-05 13:34:20.702385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.194 [2024-12-05 13:34:20.702392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.194 [2024-12-05 13:34:20.713841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.195 [2024-12-05 13:34:20.713858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.195 [2024-12-05 13:34:20.713870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.195 [2024-12-05 13:34:20.727089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.195 [2024-12-05 13:34:20.727107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.195 [2024-12-05 13:34:20.727114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.195 [2024-12-05 13:34:20.739914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.195 [2024-12-05 13:34:20.739933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.195 [2024-12-05 13:34:20.739939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.195 [2024-12-05 13:34:20.752467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.195 [2024-12-05 13:34:20.752484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.195 [2024-12-05 13:34:20.752491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.455 [2024-12-05 13:34:20.765259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x5ece80) 00:29:58.455 [2024-12-05 13:34:20.765276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:4262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.455 [2024-12-05 13:34:20.765283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.455 [2024-12-05 13:34:20.776161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.455 [2024-12-05 13:34:20.776179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.455 [2024-12-05 13:34:20.776186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.455 [2024-12-05 13:34:20.792023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.455 [2024-12-05 13:34:20.792041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.455 [2024-12-05 13:34:20.792047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.455 [2024-12-05 13:34:20.806625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.455 [2024-12-05 13:34:20.806642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.455 [2024-12-05 13:34:20.806649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.455 [2024-12-05 13:34:20.820786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.456 [2024-12-05 13:34:20.820803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.456 [2024-12-05 13:34:20.820810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.456 [2024-12-05 13:34:20.834348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.456 [2024-12-05 13:34:20.834366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.456 [2024-12-05 13:34:20.834372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.456 [2024-12-05 13:34:20.846439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.456 [2024-12-05 13:34:20.846458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.456 [2024-12-05 13:34:20.846464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.456 [2024-12-05 13:34:20.858342] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.456 [2024-12-05 13:34:20.858359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.456 [2024-12-05 13:34:20.858366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.456 [2024-12-05 13:34:20.870335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.456 [2024-12-05 13:34:20.870353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17985 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.456 [2024-12-05 13:34:20.870359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.456 [2024-12-05 13:34:20.881546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.456 [2024-12-05 13:34:20.881564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.456 [2024-12-05 13:34:20.881570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.456 [2024-12-05 13:34:20.895885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.456 [2024-12-05 13:34:20.895903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.456 [2024-12-05 13:34:20.895910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.456 [2024-12-05 13:34:20.908610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.456 [2024-12-05 13:34:20.908631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.456 [2024-12-05 13:34:20.908638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.456 [2024-12-05 13:34:20.921744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.456 [2024-12-05 13:34:20.921762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.456 [2024-12-05 13:34:20.921768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.456 [2024-12-05 13:34:20.934255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.456 [2024-12-05 13:34:20.934273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.456 [2024-12-05 13:34:20.934280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:58.456 [2024-12-05 13:34:20.944985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.456 [2024-12-05 13:34:20.945002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.456 [2024-12-05 13:34:20.945009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.456 [2024-12-05 13:34:20.957228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.456 [2024-12-05 13:34:20.957246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.456 [2024-12-05 13:34:20.957253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.456 [2024-12-05 13:34:20.971299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.456 [2024-12-05 13:34:20.971317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:14933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.456 [2024-12-05 13:34:20.971323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.456 [2024-12-05 13:34:20.984997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.456 [2024-12-05 13:34:20.985015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.456 [2024-12-05 13:34:20.985021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.456 [2024-12-05 13:34:20.996224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.456 [2024-12-05 13:34:20.996242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.456 [2024-12-05 13:34:20.996249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.456 [2024-12-05 13:34:21.009921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.456 [2024-12-05 13:34:21.009939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.456 [2024-12-05 13:34:21.009945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.456 [2024-12-05 13:34:21.021126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.747 [2024-12-05 13:34:21.021144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.747 [2024-12-05 13:34:21.021154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.747 [2024-12-05 13:34:21.032505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.747 [2024-12-05 13:34:21.032522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.747 [2024-12-05 13:34:21.032529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.747 [2024-12-05 13:34:21.046631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.747 [2024-12-05 13:34:21.046650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.747 [2024-12-05 13:34:21.046656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.747 [2024-12-05 13:34:21.059914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.747 [2024-12-05 13:34:21.059932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.747 [2024-12-05 13:34:21.059939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.747 [2024-12-05 13:34:21.072200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.747 [2024-12-05 13:34:21.072218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:19789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.747 [2024-12-05 13:34:21.072225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.747 [2024-12-05 13:34:21.082977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.747 [2024-12-05 13:34:21.082995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.747 [2024-12-05 13:34:21.083002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.747 [2024-12-05 13:34:21.095158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.747 [2024-12-05 13:34:21.095176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.747 [2024-12-05 13:34:21.095182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.747 [2024-12-05 13:34:21.108701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.747 [2024-12-05 13:34:21.108719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.747 [2024-12-05 13:34:21.108725] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.747 [2024-12-05 13:34:21.120985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.747 [2024-12-05 13:34:21.121002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.747 [2024-12-05 13:34:21.121015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.747 [2024-12-05 13:34:21.133197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.747 [2024-12-05 13:34:21.133214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.747 [2024-12-05 13:34:21.133221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.747 [2024-12-05 13:34:21.146247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.747 [2024-12-05 13:34:21.146264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:24025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.747 [2024-12-05 13:34:21.146270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.748 [2024-12-05 13:34:21.158625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.748 [2024-12-05 13:34:21.158641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.748 [2024-12-05 13:34:21.158648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.748 [2024-12-05 13:34:21.170926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.748 [2024-12-05 13:34:21.170943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.748 [2024-12-05 13:34:21.170950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.748 [2024-12-05 13:34:21.182299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.748 [2024-12-05 13:34:21.182317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:20844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.748 [2024-12-05 13:34:21.182323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.748 [2024-12-05 13:34:21.195742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.748 [2024-12-05 13:34:21.195759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.748 [2024-12-05 13:34:21.195765] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.748 [2024-12-05 13:34:21.208738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.748 [2024-12-05 13:34:21.208755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.748 [2024-12-05 13:34:21.208762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.748 [2024-12-05 13:34:21.220387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.748 [2024-12-05 13:34:21.220404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.748 [2024-12-05 13:34:21.220411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.748 [2024-12-05 13:34:21.231912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.748 [2024-12-05 13:34:21.231932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.748 [2024-12-05 13:34:21.231939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.748 [2024-12-05 13:34:21.245601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.748 [2024-12-05 13:34:21.245618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.748 [2024-12-05 13:34:21.245625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.748 [2024-12-05 13:34:21.257225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.748 [2024-12-05 13:34:21.257242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.748 [2024-12-05 13:34:21.257249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:58.748 [2024-12-05 13:34:21.270953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:58.748 [2024-12-05 13:34:21.270970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.748 [2024-12-05 13:34:21.270977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:59.008 [2024-12-05 13:34:21.282899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80) 00:29:59.008 [2024-12-05 13:34:21.282917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:59.008 [2024-12-05 13:34:21.282924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:59.008 [2024-12-05 13:34:21.293002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x5ece80)
00:29:59.008 [2024-12-05 13:34:21.293020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:59.008 [2024-12-05 13:34:21.293027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:59.008 20286.50 IOPS, 79.24 MiB/s
00:29:59.008 Latency(us)
00:29:59.008 [2024-12-05T12:34:21.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:59.008 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:59.008 nvme0n1 : 2.00 20306.48 79.32 0.00 0.00 6296.87 2471.25 15837.87
00:29:59.008 [2024-12-05T12:34:21.576Z] ===================================================================================================================
00:29:59.008 [2024-12-05T12:34:21.576Z] Total : 20306.48 79.32 0.00 0.00 6296.87 2471.25 15837.87
00:29:59.008 {
00:29:59.008   "results": [
00:29:59.008     {
00:29:59.008       "job": "nvme0n1",
00:29:59.008       "core_mask": "0x2",
00:29:59.008       "workload": "randread",
00:29:59.008       "status": "finished",
00:29:59.008       "queue_depth": 128,
00:29:59.008       "io_size": 4096,
00:29:59.008       "runtime": 2.004336,
00:29:59.008       "iops": 20306.47556098379,
00:29:59.008       "mibps": 79.32217016009292,
00:29:59.008       "io_failed": 0,
00:29:59.008       "io_timeout": 0,
00:29:59.008       "avg_latency_us": 6296.866109432201,
00:29:59.008       "min_latency_us": 2471.2533333333336,
00:29:59.008       "max_latency_us": 15837.866666666667
00:29:59.008     }
00:29:59.008   ],
00:29:59.008   "core_count": 1
00:29:59.008 }
00:29:59.009 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:59.009 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:59.009 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:59.009 | .driver_specific
00:29:59.009 | .nvme_error
00:29:59.009 | .status_code
00:29:59.009 | .command_transient_transport_error'
00:29:59.009 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:59.009 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 ))
00:29:59.009 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1119017
00:29:59.009 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1119017 ']'
00:29:59.009 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1119017
00:29:59.009 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:59.009 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:59.009 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1119017
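The trace above is the pass/fail core of the digest-error case: get_transient_errcount queries bdev_get_iostat over the bperf RPC socket and extracts the transient-transport-error counter from the bdev's driver_specific NVMe error stats (present because bdev_nvme_set_options was called with --nvme-error-stat), and the traced comparison (( 159 > 0 )) shows 159 such completions were counted on this run. Condensed into a standalone sketch, assuming the same socket path, bdev name, and workspace layout as in the trace:

    #!/usr/bin/env bash
    # Minimal sketch of the traced get_transient_errcount check; assumes a
    # bdevperf instance is already listening on /var/tmp/bperf.sock and that
    # --nvme-error-stat was enabled (otherwise the nvme_error block is absent).
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Same jq path as the traced filter, written as one expression.
    errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

    # Any nonzero count means the injected digest errors surfaced as
    # COMMAND TRANSIENT TRANSPORT ERROR completions, so the test passes.
    (( errcount > 0 )) && echo "digest errors detected: $errcount transient transport errors"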
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1119017'
00:29:59.269 killing process with pid 1119017
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1119017
00:29:59.269 Received shutdown signal, test time was about 2.000000 seconds
00:29:59.269
00:29:59.269 Latency(us)
00:29:59.269 [2024-12-05T12:34:21.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:59.269 [2024-12-05T12:34:21.837Z] ===================================================================================================================
00:29:59.269 [2024-12-05T12:34:21.837Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1119017
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1119709
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1119709 /var/tmp/bperf.sock
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1119709 ']'
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:59.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:59.269 13:34:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:59.269 [2024-12-05 13:34:21.729946] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization...
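The trace above tears down the first bdevperf instance and launches the next one for the randread/131072/16 error case, then blocks in waitforlisten until the new process answers on its RPC socket. A minimal sketch of that launch step, with the poll loop as an illustrative stand-in for the harness's waitforlisten helper (which, as traced, retries up to max_retries=100):

    #!/usr/bin/env bash
    # Sketch of the traced launch: start bdevperf in "wait for RPC" mode (-z)
    # and poll its UNIX socket until the RPC server responds. Flags are taken
    # verbatim from the trace; the loop itself is only an assumption-level
    # stand-in for waitforlisten.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # rpc_get_methods is a cheap query; success means the socket is live.
    for _ in $(seq 1 100); do
        "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
    echo "bdevperf (pid $bperfpid) is listening on /var/tmp/bperf.sock"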
00:29:59.269 [2024-12-05 13:34:21.730000] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119709 ]
00:29:59.269 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:59.269 Zero copy mechanism will not be used.
00:29:59.269 [2024-12-05 13:34:21.821697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:59.530 [2024-12-05 13:34:21.849605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:00.099 13:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:00.099 13:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:00.099 13:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:00.099 13:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:00.359 13:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:00.359 13:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.359 13:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:00.359 13:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.359 13:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:00.359 13:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:00.359 nvme0n1
00:30:00.619 13:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:00.619 13:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:00.619 13:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:00.619 13:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:00.619 13:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:00.619 13:34:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:00.619 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:00.619 Zero copy mechanism will not be used.
00:30:00.619 Running I/O for 2 seconds...
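The RPC sequence just traced is what arms the test: per-error-code accounting and unlimited bdev retries are enabled, a controller is attached over TCP with data digest enabled (--ddgst), and the accel crc32c error injector is switched from disable to corrupt before I/O starts, so data-digest verification fails on a fraction of reads. Collected into one sketch, assuming the same socket, target address, and subsystem NQN as the trace (-i 32 is taken verbatim from it):

    #!/usr/bin/env bash
    # The traced configure/inject/run sequence in one place. bperf_rpc here is
    # a local stand-in for the harness's wrapper of the same name.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf_rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    bperf_rpc accel_error_inject_error -o crc32c -t disable      # start from a clean injector state
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0           # prints nvme0n1 on success, as traced
    bperf_rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

With --bdev-retry-count -1 the corrupted digests never fail the I/O outright; each one is retried and recorded as a COMMAND TRANSIENT TRANSPORT ERROR completion, which is exactly the counter the digest check reads back afterwards, and what fills the run log that follows.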
00:30:00.619 [2024-12-05 13:34:23.039145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.619 [2024-12-05 13:34:23.039181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.619 [2024-12-05 13:34:23.039191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:00.619 [2024-12-05 13:34:23.049232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.619 [2024-12-05 13:34:23.049257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.619 [2024-12-05 13:34:23.049264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:00.619 [2024-12-05 13:34:23.059983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.619 [2024-12-05 13:34:23.060004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.619 [2024-12-05 13:34:23.060012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:00.619 [2024-12-05 13:34:23.070459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.619 [2024-12-05 13:34:23.070479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.619 [2024-12-05 13:34:23.070487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:00.619 [2024-12-05 13:34:23.080722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.619 [2024-12-05 13:34:23.080741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.619 [2024-12-05 13:34:23.080748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:00.619 [2024-12-05 13:34:23.089398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.619 [2024-12-05 13:34:23.089418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.620 [2024-12-05 13:34:23.089425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:00.620 [2024-12-05 13:34:23.098211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.620 [2024-12-05 13:34:23.098230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.620 [2024-12-05 13:34:23.098236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:00.620 [2024-12-05 13:34:23.109696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.620 [2024-12-05 13:34:23.109715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.620 [2024-12-05 13:34:23.109722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:00.620 [2024-12-05 13:34:23.121803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.620 [2024-12-05 13:34:23.121822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.620 [2024-12-05 13:34:23.121829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:00.620 [2024-12-05 13:34:23.131752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.620 [2024-12-05 13:34:23.131772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.620 [2024-12-05 13:34:23.131778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:00.620 [2024-12-05 13:34:23.142642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.620 [2024-12-05 13:34:23.142662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.620 [2024-12-05 13:34:23.142675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:00.620 [2024-12-05 13:34:23.153180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.620 [2024-12-05 13:34:23.153200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.620 [2024-12-05 13:34:23.153206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:00.620 [2024-12-05 13:34:23.162052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.620 [2024-12-05 13:34:23.162072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.620 [2024-12-05 13:34:23.162078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:00.620 [2024-12-05 13:34:23.174019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.620 [2024-12-05 13:34:23.174038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.620 [2024-12-05 13:34:23.174045] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:00.620 [2024-12-05 13:34:23.184603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.620 [2024-12-05 13:34:23.184622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.620 [2024-12-05 13:34:23.184629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.194347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.194367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.194373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.205848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.205872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.205879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.216421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.216440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.216446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.225914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.225932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.225939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.236869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.236888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.236895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.246005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.246024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.246031] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.257435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.257454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.257461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.268311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.268330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.268337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.278907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.278926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.278933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.290549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.290568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.290575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.301193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.301211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.301217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.311932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.311952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.311958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.322899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.322918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:00.880 [2024-12-05 13:34:23.322927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.332873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.332892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.332898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.343926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.343946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.343953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.353713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.353732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.353738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.359366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.359384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.359391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.365690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.365708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.365714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.377114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.377133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.377140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.388747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.388766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.388773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.399810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.399829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.399836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.411204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.411226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.411233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:00.880 [2024-12-05 13:34:23.421241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.880 [2024-12-05 13:34:23.421260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.880 [2024-12-05 13:34:23.421266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:00.881 [2024-12-05 13:34:23.432698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.881 [2024-12-05 13:34:23.432716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.881 [2024-12-05 13:34:23.432722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:00.881 [2024-12-05 13:34:23.444129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:00.881 [2024-12-05 13:34:23.444148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:00.881 [2024-12-05 13:34:23.444155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:01.140 [2024-12-05 13:34:23.455151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.140 [2024-12-05 13:34:23.455170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.140 [2024-12-05 13:34:23.455176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:01.140 [2024-12-05 13:34:23.466476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.140 [2024-12-05 13:34:23.466494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.140 [2024-12-05 13:34:23.466501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:01.140 [2024-12-05 13:34:23.478260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.140 [2024-12-05 13:34:23.478279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.140 [2024-12-05 13:34:23.478285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:01.140 [2024-12-05 13:34:23.486706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.140 [2024-12-05 13:34:23.486725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.140 [2024-12-05 13:34:23.486731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:01.140 [2024-12-05 13:34:23.497162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.140 [2024-12-05 13:34:23.497180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.140 [2024-12-05 13:34:23.497187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:01.140 [2024-12-05 13:34:23.506484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.140 [2024-12-05 13:34:23.506502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.140 [2024-12-05 13:34:23.506509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:01.140 [2024-12-05 13:34:23.518521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.140 [2024-12-05 13:34:23.518540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.140 [2024-12-05 13:34:23.518546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:01.140 [2024-12-05 13:34:23.529282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.141 [2024-12-05 13:34:23.529301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.529307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:01.141 [2024-12-05 13:34:23.541162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 
00:30:01.141 [2024-12-05 13:34:23.541180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.541187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:01.141 [2024-12-05 13:34:23.552388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.141 [2024-12-05 13:34:23.552408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.552414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:01.141 [2024-12-05 13:34:23.562699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.141 [2024-12-05 13:34:23.562718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.562724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:01.141 [2024-12-05 13:34:23.574112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.141 [2024-12-05 13:34:23.574131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.574137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:01.141 [2024-12-05 13:34:23.584190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.141 [2024-12-05 13:34:23.584209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.584216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:01.141 [2024-12-05 13:34:23.595843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.141 [2024-12-05 13:34:23.595866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.595876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:01.141 [2024-12-05 13:34:23.605035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.141 [2024-12-05 13:34:23.605054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.605061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:01.141 [2024-12-05 13:34:23.614064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.141 [2024-12-05 13:34:23.614084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.614090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:01.141 [2024-12-05 13:34:23.624214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.141 [2024-12-05 13:34:23.624234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.624240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:01.141 [2024-12-05 13:34:23.635993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.141 [2024-12-05 13:34:23.636011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.636018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:01.141 [2024-12-05 13:34:23.644556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.141 [2024-12-05 13:34:23.644575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.644582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:01.141 [2024-12-05 13:34:23.654284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.141 [2024-12-05 13:34:23.654303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.654310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:01.141 [2024-12-05 13:34:23.665227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.141 [2024-12-05 13:34:23.665246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.665253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:01.141 [2024-12-05 13:34:23.675909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.141 [2024-12-05 13:34:23.675927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.675934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:01.141 [2024-12-05 13:34:23.687278] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.141 [2024-12-05 13:34:23.687297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.687304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:01.141 [2024-12-05 13:34:23.699611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.141 [2024-12-05 13:34:23.699630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.141 [2024-12-05 13:34:23.699637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:01.400 [2024-12-05 13:34:23.709498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.400 [2024-12-05 13:34:23.709520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.400 [2024-12-05 13:34:23.709527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:01.400 [2024-12-05 13:34:23.720837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.400 [2024-12-05 13:34:23.720857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.400 [2024-12-05 13:34:23.720869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:01.400 [2024-12-05 13:34:23.727542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.400 [2024-12-05 13:34:23.727561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.400 [2024-12-05 13:34:23.727568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:01.400 [2024-12-05 13:34:23.736742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.400 [2024-12-05 13:34:23.736761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.400 [2024-12-05 13:34:23.736768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:01.400 [2024-12-05 13:34:23.748170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.400 [2024-12-05 13:34:23.748189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.400 [2024-12-05 13:34:23.748196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:30:01.400 [2024-12-05 13:34:23.758733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.400 [2024-12-05 13:34:23.758752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.400 [2024-12-05 13:34:23.758758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:01.400 [2024-12-05 13:34:23.769322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.400 [2024-12-05 13:34:23.769340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.400 [2024-12-05 13:34:23.769349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:01.400 [2024-12-05 13:34:23.779217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.400 [2024-12-05 13:34:23.779236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.400 [2024-12-05 13:34:23.779242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:01.400 [2024-12-05 13:34:23.787988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.400 [2024-12-05 13:34:23.788007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.400 [2024-12-05 13:34:23.788013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:01.400 [2024-12-05 13:34:23.797653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.400 [2024-12-05 13:34:23.797672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.401 [2024-12-05 13:34:23.797678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:01.401 [2024-12-05 13:34:23.808653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.401 [2024-12-05 13:34:23.808672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.401 [2024-12-05 13:34:23.808678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:01.401 [2024-12-05 13:34:23.815681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.401 [2024-12-05 13:34:23.815700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.401 [2024-12-05 13:34:23.815707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:01.401 [2024-12-05 13:34:23.825970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.401 [2024-12-05 13:34:23.825989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.401 [2024-12-05 13:34:23.825995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:01.401 [2024-12-05 13:34:23.838106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.401 [2024-12-05 13:34:23.838124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.401 [2024-12-05 13:34:23.838130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:01.401 [2024-12-05 13:34:23.847109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.401 [2024-12-05 13:34:23.847128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.401 [2024-12-05 13:34:23.847135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:01.401 [2024-12-05 13:34:23.858945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.401 [2024-12-05 13:34:23.858967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.401 [2024-12-05 13:34:23.858974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:01.401 [2024-12-05 13:34:23.870440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.401 [2024-12-05 13:34:23.870460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.401 [2024-12-05 13:34:23.870466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:01.401 [2024-12-05 13:34:23.881048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.401 [2024-12-05 13:34:23.881067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.401 [2024-12-05 13:34:23.881073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:01.401 [2024-12-05 13:34:23.891259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.401 [2024-12-05 13:34:23.891279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.401 [2024-12-05 13:34:23.891285] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:01.401 [2024-12-05 13:34:23.897438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.401 [2024-12-05 13:34:23.897457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.401 [2024-12-05 13:34:23.897463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:01.401 [2024-12-05 13:34:23.910195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.401 [2024-12-05 13:34:23.910214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.401 [2024-12-05 13:34:23.910220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:01.401 [2024-12-05 13:34:23.921852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.401 [2024-12-05 13:34:23.921876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.401 [2024-12-05 13:34:23.921882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:01.401 [2024-12-05 13:34:23.934009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.401 [2024-12-05 13:34:23.934029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.401 [2024-12-05 13:34:23.934036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:01.401 [2024-12-05 13:34:23.944801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.401 [2024-12-05 13:34:23.944820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.401 [2024-12-05 13:34:23.944827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:01.401 [2024-12-05 13:34:23.956859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.401 [2024-12-05 13:34:23.956884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.401 [2024-12-05 13:34:23.956890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:01.661 [2024-12-05 13:34:23.967466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0) 00:30:01.661 [2024-12-05 13:34:23.967486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:01.661 [2024-12-05 13:34:23.967493] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:01.661 [2024-12-05 13:34:23.978241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15b35a0)
00:30:01.661 [2024-12-05 13:34:23.978260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:01.661 [2024-12-05 13:34:23.978266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... a long run of further records from 13:34:23.987 through 13:34:25.036, all with the same three-line shape as above (nvme_tcp.c:1365 data digest error on tqpair=(0x15b35a0); nvme_qpair.c:243 READ command print on qid:1 with cid 0-15, varying lba, len:32; nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22)), omitted here; one interleaved throughput sample in this stretch read "2991.00 IOPS, 373.88 MiB/s [2024-12-05T12:34:24.229Z]" ...]
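Each record in the run above is one injected crc32c failure surfacing as an NVMe completion with status (00/22), i.e. Command Transient Transport Error. To tally them from a saved console log, a one-liner like the following is enough (a sketch; the file name bperf-run.log is hypothetical):

    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf-run.log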
00:30:02.705 3212.50 IOPS, 401.56 MiB/s
00:30:02.705 Latency(us)
00:30:02.705 [2024-12-05T12:34:25.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:02.705 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:30:02.705 nvme0n1 : 2.00 3212.70 401.59 0.00 0.00 4977.14 645.12 13161.81
00:30:02.705 [2024-12-05T12:34:25.273Z] ===================================================================================================================
00:30:02.705 [2024-12-05T12:34:25.273Z] Total : 3212.70 401.59 0.00 0.00 4977.14 645.12 13161.81
00:30:02.705 {
00:30:02.705   "results": [
00:30:02.705     {
00:30:02.705       "job": "nvme0n1",
00:30:02.705       "core_mask": "0x2",
00:30:02.705       "workload": "randread",
00:30:02.705       "status": "finished",
00:30:02.705       "queue_depth": 16,
00:30:02.705       "io_size": 131072,
00:30:02.705       "runtime": 2.004856,
00:30:02.705       "iops": 3212.699565455075,
00:30:02.705       "mibps": 401.5874456818844,
00:30:02.705       "io_failed": 0,
00:30:02.705       "io_timeout": 0,
00:30:02.705       "avg_latency_us": 4977.13787714123,
00:30:02.705       "min_latency_us": 645.12,
00:30:02.705       "max_latency_us": 13161.813333333334
00:30:02.705     }
00:30:02.705   ],
00:30:02.705   "core_count": 1
00:30:02.705 }
00:30:02.705 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:02.705 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:02.705 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:02.705 | .driver_specific
00:30:02.705 | .nvme_error
00:30:02.705 | .status_code
00:30:02.705 | .command_transient_transport_error'
00:30:02.705 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:02.705 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 ))
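The trace above is the test's pass condition: get_transient_errcount fetches the bdev's NVMe error statistics over the bperf.sock RPC channel and jq pulls out the transient-transport-error counter, which must be non-zero. A minimal standalone sketch of the same pipeline (function name, socket path, and jq filter are the ones shown in the trace; it assumes the controller was attached with --nvme-error-stat so the counters are kept):

    get_transient_errcount() {
        local bdev=$1
        # per-bdev I/O stats include NVMe status-code counters when --nvme-error-stat is set
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }
    (( $(get_transient_errcount nvme0n1) > 0 ))   # here the counter is 208, so the check passes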
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1119709'
00:30:02.965 killing process with pid 1119709
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1119709
00:30:02.965 Received shutdown signal, test time was about 2.000000 seconds
00:30:02.965
00:30:02.965 Latency(us)
00:30:02.965 [2024-12-05T12:34:25.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:02.965 [2024-12-05T12:34:25.533Z] ===================================================================================================================
00:30:02.965 [2024-12-05T12:34:25.533Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1119709
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1120385
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1120385 /var/tmp/bperf.sock
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1120385 ']'
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:02.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:02.965 13:34:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:02.965 [2024-12-05 13:34:25.471447] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization...
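run_bperf_err relaunches bdevperf in RPC-wait mode for each workload and blocks until the UNIX socket answers before sending any configuration. A condensed sketch of that launch-and-wait step, with the polling loop standing in for the waitforlisten helper from common/autotest_common.sh (retry count and sleep interval are illustrative):

    # Start bdevperf halted (-z) on the bperf RPC socket, then poll until it responds.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done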
00:30:02.965 [2024-12-05 13:34:25.471505] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120385 ]
00:30:03.225 [2024-12-05 13:34:25.562407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:03.225 [2024-12-05 13:34:25.591696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:03.796 13:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:03.796 13:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:03.796 13:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:03.796 13:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:04.057 13:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:04.057 13:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:04.057 13:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:04.057 13:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:04.057 13:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:04.057 13:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:04.318 nvme0n1
00:30:04.318 13:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:30:04.318 13:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:04.318 13:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:04.318 13:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:04.318 13:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:04.318 13:34:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:04.318 Running I/O for 2 seconds...
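The flood of digest errors that follows is deliberate: the trace above clears any stale injection, attaches the controller with TCP data digest enabled (--ddgst), then re-arms the accel error injector so the next 256 crc32c operations are corrupted, which makes every data digest verification fail. The same RPC sequence, collected into one sketch (reusing the SPDK path variable from the sketches above):

    # Arm crc32c corruption so each data digest check on the TCP transport fails.
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC accel_error_inject_error -o crc32c -t disable          # clear leftover injections
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0          # data digest on
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt next 256 crc32c ops
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests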
00:30:04.318 [2024-12-05 13:34:26.881400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58
00:30:04.318 [2024-12-05 13:34:26.881608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:04.318 [2024-12-05 13:34:26.881634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
[... similar entries condensed (00:30:04.318 - 00:30:05.366, 13:34:26.893 - 13:34:27.858, roughly every 12 ms): tcp.c:2241:data_crc32_calc_done reports the same data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58, and each in-flight WRITE (sqid:1, cid 105-107, nsid:1, len:1, varying lba) completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:0067 p:0 m:0 dnr:0 ...]
00:30:05.366 20780.00 IOPS, 81.17 MiB/s [2024-12-05T12:34:27.934Z]
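Each corrupted crc32c completes its WRITE with status 00/22, printed by the initiator as COMMAND TRANSIENT TRANSPORT ERROR and counted toward the tally that get_transient_errcount reads back after the run. To spot-check that the two agree from a saved console log (file name illustrative):

    # Count injected digest failures recorded in a captured log.
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' console.log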
[... similar entries condensed (00:30:05.366 - 00:30:05.890, 13:34:27.870 - 13:34:28.298): same data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58, each WRITE on sqid:1, cid 105-107, completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:30:05.890 [2024-12-05 13:34:28.310213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58
00:30:05.890 [2024-12-05 13:34:28.310496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:05.890 [2024-12-05 13:34:28.310512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:05.890 [2024-12-05 13:34:28.322407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:05.890 [2024-12-05 13:34:28.322712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.890 [2024-12-05 13:34:28.322728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:05.890 [2024-12-05 13:34:28.334575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:05.890 [2024-12-05 13:34:28.334869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.890 [2024-12-05 13:34:28.334886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:05.890 [2024-12-05 13:34:28.346759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:05.890 [2024-12-05 13:34:28.347034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.890 [2024-12-05 13:34:28.347052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:05.890 [2024-12-05 13:34:28.358961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:05.890 [2024-12-05 13:34:28.359247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.890 [2024-12-05 13:34:28.359264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:05.890 [2024-12-05 13:34:28.371149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:05.890 [2024-12-05 13:34:28.371439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.890 [2024-12-05 13:34:28.371456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:05.890 [2024-12-05 13:34:28.383336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:05.890 [2024-12-05 13:34:28.383630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.890 [2024-12-05 13:34:28.383647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:05.890 [2024-12-05 13:34:28.395510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:05.890 [2024-12-05 13:34:28.395808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.890 [2024-12-05 
13:34:28.395825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:05.890 [2024-12-05 13:34:28.407685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:05.890 [2024-12-05 13:34:28.408006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.890 [2024-12-05 13:34:28.408022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:05.890 [2024-12-05 13:34:28.419887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:05.890 [2024-12-05 13:34:28.420193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.890 [2024-12-05 13:34:28.420209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:05.890 [2024-12-05 13:34:28.432080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:05.890 [2024-12-05 13:34:28.432380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.890 [2024-12-05 13:34:28.432396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:05.890 [2024-12-05 13:34:28.444267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:05.890 [2024-12-05 13:34:28.444452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:05.891 [2024-12-05 13:34:28.444468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.151 [2024-12-05 13:34:28.456469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.151 [2024-12-05 13:34:28.456747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.151 [2024-12-05 13:34:28.456764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.151 [2024-12-05 13:34:28.468647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.151 [2024-12-05 13:34:28.468828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.468843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.480843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.481149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6057 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:06.152 [2024-12-05 13:34:28.481164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.493040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.493344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:11011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.493361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.505220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.505548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.505565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.517407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.517684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.517701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.529571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.529872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.529888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.541753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.542075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.542091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.553950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.554239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.554263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.566135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.566461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18847 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.566480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.578326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.578617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.578634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.590505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.590806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.590822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.602672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.602980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.602999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.614866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.615155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.615172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.627050] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.627331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.627347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.639205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.639490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.639507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.651417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.651699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:106 nsid:1 lba:24573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.651716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.663735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.664051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.664068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.675974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.676256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.676272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.688143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.688428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.688445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.700434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.700616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.700632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.152 [2024-12-05 13:34:28.712622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.152 [2024-12-05 13:34:28.712925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.152 [2024-12-05 13:34:28.712942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.414 [2024-12-05 13:34:28.724812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.414 [2024-12-05 13:34:28.725130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.414 [2024-12-05 13:34:28.725147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.414 [2024-12-05 13:34:28.736979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.414 [2024-12-05 13:34:28.737299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.414 [2024-12-05 13:34:28.737316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.414 [2024-12-05 13:34:28.749229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.414 [2024-12-05 13:34:28.749520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.414 [2024-12-05 13:34:28.749536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.414 [2024-12-05 13:34:28.761420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.414 [2024-12-05 13:34:28.761708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.414 [2024-12-05 13:34:28.761725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.414 [2024-12-05 13:34:28.773646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.415 [2024-12-05 13:34:28.773943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.415 [2024-12-05 13:34:28.773963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.415 [2024-12-05 13:34:28.785880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.415 [2024-12-05 13:34:28.786186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.415 [2024-12-05 13:34:28.786203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.415 [2024-12-05 13:34:28.798088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.415 [2024-12-05 13:34:28.798404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.415 [2024-12-05 13:34:28.798420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.415 [2024-12-05 13:34:28.810300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.415 [2024-12-05 13:34:28.810589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:06.415 [2024-12-05 13:34:28.810605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:06.415 [2024-12-05 13:34:28.822484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58 00:30:06.415 [2024-12-05 
13:34:28.822792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:06.415 [2024-12-05 13:34:28.822808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:30:06.415 [2024-12-05 13:34:28.834682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58
00:30:06.415 [2024-12-05 13:34:28.834982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:06.415 [2024-12-05 13:34:28.834999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:30:06.415 [2024-12-05 13:34:28.846885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58
00:30:06.415 [2024-12-05 13:34:28.847187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:06.415 [2024-12-05 13:34:28.847203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:30:06.415 [2024-12-05 13:34:28.859058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58
00:30:06.415 [2024-12-05 13:34:28.859237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:06.415 [2024-12-05 13:34:28.859253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:30:06.415 20869.00 IOPS, 81.52 MiB/s [2024-12-05T12:34:28.983Z]
[2024-12-05 13:34:28.871321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb889c0) with pdu=0x200016efeb58
00:30:06.415 [2024-12-05 13:34:28.871612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:06.415 [2024-12-05 13:34:28.871627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:30:06.415
00:30:06.415                                                                         Latency(us)
00:30:06.415 [2024-12-05T12:34:28.983Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:30:06.415 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:30:06.415    nvme0n1                  :       2.01   20869.90      81.52       0.00     0.00    6121.53    4915.20   12779.52
00:30:06.415 [2024-12-05T12:34:28.983Z] ===================================================================================================================
00:30:06.415 [2024-12-05T12:34:28.983Z] Total                       :            20869.90      81.52       0.00     0.00    6121.53    4915.20   12779.52
00:30:06.415 {
00:30:06.415   "results": [
00:30:06.415     {
00:30:06.415       "job": "nvme0n1",
00:30:06.415       "core_mask": "0x2",
00:30:06.415       "workload": "randwrite",
00:30:06.415       "status": "finished",
00:30:06.415       "queue_depth": 128,
00:30:06.415       "io_size": 4096,
00:30:06.415       "runtime": 2.007197,
00:30:06.415       "iops": 20869.89966605171,
00:30:06.415       "mibps": 81.5230455705145,
00:30:06.415       "io_failed": 0,
00:30:06.415       "io_timeout": 0,
00:30:06.415       "avg_latency_us": 6121.525941911355,
00:30:06.415       "min_latency_us": 4915.2,
"max_latency_us": 12779.52 00:30:06.415 } 00:30:06.415 ], 00:30:06.415 "core_count": 1 00:30:06.415 } 00:30:06.415 13:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:30:06.415 13:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:30:06.415 13:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:30:06.415 | .driver_specific 00:30:06.415 | .nvme_error 00:30:06.415 | .status_code 00:30:06.415 | .command_transient_transport_error' 00:30:06.415 13:34:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:30:06.676 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 )) 00:30:06.676 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1120385 00:30:06.676 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1120385 ']' 00:30:06.676 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1120385 00:30:06.676 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:30:06.676 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:06.676 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1120385 00:30:06.676 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:06.676 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:06.676 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1120385' 00:30:06.676 killing process with pid 1120385 00:30:06.676 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1120385 00:30:06.676 Received shutdown signal, test time was about 2.000000 seconds 00:30:06.676 00:30:06.677 Latency(us) 00:30:06.677 [2024-12-05T12:34:29.245Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:06.677 [2024-12-05T12:34:29.245Z] =================================================================================================================== 00:30:06.677 [2024-12-05T12:34:29.245Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:06.677 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1120385 00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1121092 
00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1121092
00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1121092 /var/tmp/bperf.sock
00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1121092 ']'
00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:30:06.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:06.937 13:34:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:06.937 [2024-12-05 13:34:29.299190] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization...
00:30:06.937 [2024-12-05 13:34:29.299245] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121092 ]
00:30:06.937 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:06.937 Zero copy mechanism will not be used.
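waitforlisten above blocks until the freshly forked bdevperf (pid 1121092) answers on its UNIX-domain RPC socket. A stripped-down illustration of that wait, polling with rpc_get_methods (a standard SPDK RPC used here purely as a liveness probe; the real helper in autotest_common.sh does more bookkeeping):

    # Sketch only: poll the RPC socket until the app is up, max ~10 s.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        "$rpc" -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done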
00:30:06.937 [2024-12-05 13:34:29.390787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:06.937 [2024-12-05 13:34:29.419632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:07.879 13:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:07.879 13:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:30:07.879 13:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:07.879 13:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:30:07.879 13:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:30:07.879 13:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:07.879 13:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:07.879 13:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:07.879 13:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:07.879 13:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:30:08.140 nvme0n1
00:30:08.140 13:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:30:08.140 13:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:08.140 13:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:08.140 13:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:08.140 13:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:30:08.140 13:34:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:30:08.140 I/O size of 131072 is greater than zero copy threshold (65536).
00:30:08.140 Zero copy mechanism will not be used.
00:30:08.140 Running I/O for 2 seconds...
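The traces above contain the complete recipe for this error-path run: per-status-code NVMe error counters on, CRC injection disabled while the controller attaches (presumably so the connect sequence itself completes cleanly), data digest (--ddgst) enabled on the TCP controller, and corruption injected into the accel crc32c operation before I/O starts. Condensed into one sketch against an already-running bdevperf, with every RPC name and flag taken verbatim from the trace:

    # Condensed from the trace above; bdevperf is assumed listening on bperf.sock.
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # Keep per-status-code NVMe error counters and retry failed I/O indefinitely.
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Make sure no corruption is active while the controller connects.
    $rpc accel_error_inject_error -o crc32c -t disable
    # Attach the NVMe-oF TCP controller with data digest enabled.
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt crc32c accel operations on an interval of 32 (the -i 32 traced above).
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    # Drive the workload; digest miscompares surface as transient transport errors.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then shows up twice in the log that follows: once as the tcp.c data_crc32_calc_done *ERROR* where the miscompare is detected, and once as the COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion that the later bdev_get_iostat query tallies.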
00:30:08.140 [2024-12-05 13:34:30.702810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.140 [2024-12-05 13:34:30.702908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.140 [2024-12-05 13:34:30.702935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:08.401 [2024-12-05 13:34:30.712171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.401 [2024-12-05 13:34:30.712411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.712431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.718770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.718850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.718872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.726187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.726258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.726275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.735974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.736030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.736046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.740655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.740913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.740932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.747929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.748210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.748228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.754897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.755154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.755172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.761296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.761573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.761590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.766742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.766799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.766816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.773216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.773276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.773294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.780282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.780338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.780355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.784654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.784726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.784743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.790857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.791123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.791142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.799363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.799437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.799453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.806007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.806085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.806101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.815652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.815726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.815750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.820937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.821195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.821214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.825903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.825986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.826002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.830732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.830868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.830886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:08.402 [2024-12-05 13:34:30.839182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:08.402 [2024-12-05 13:34:30.839411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:08.402 [2024-12-05 13:34:30.839428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:08.402 [2024-12-05 13:34:30.844429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8
00:30:08.402 [2024-12-05 13:34:30.844496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.402 [2024-12-05 13:34:30.844512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:30:08.402 [2024-12-05 13:34:30.849019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8
00:30:08.402 [2024-12-05 13:34:30.849088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:08.402 [2024-12-05 13:34:30.849104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... repeated records elided: the same three-line pattern (tcp.c:2241:data_crc32_calc_done *ERROR* "Data digest error", nvme_qpair.c:243 WRITE command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) recurs between 13:34:30.856 and 13:34:31.663 with varying timestamps and LBAs; sqhd cycles 0002/0022/0042/0062 throughout and cid advances from 0 to 1 at 13:34:31.172 ...]
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.195 [2024-12-05 13:34:31.630366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.195 [2024-12-05 13:34:31.630620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.195 [2024-12-05 13:34:31.630638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.195 [2024-12-05 13:34:31.638823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.195 [2024-12-05 13:34:31.639088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.195 [2024-12-05 13:34:31.639106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.195 [2024-12-05 13:34:31.647506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.195 [2024-12-05 13:34:31.647734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.195 [2024-12-05 13:34:31.647751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.195 [2024-12-05 13:34:31.653790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.195 [2024-12-05 13:34:31.653881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.195 [2024-12-05 13:34:31.653897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.195 [2024-12-05 13:34:31.662495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.195 [2024-12-05 13:34:31.662724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.195 [2024-12-05 13:34:31.662741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.195 [2024-12-05 13:34:31.669098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.195 [2024-12-05 13:34:31.669244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.195 [2024-12-05 13:34:31.669260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.195 [2024-12-05 13:34:31.675081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.195 [2024-12-05 13:34:31.675346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.195 [2024-12-05 13:34:31.675362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.195 [2024-12-05 13:34:31.680368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.195 [2024-12-05 13:34:31.680589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.195 [2024-12-05 13:34:31.680605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.195 [2024-12-05 13:34:31.687695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.195 [2024-12-05 13:34:31.687955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.195 [2024-12-05 13:34:31.687972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.195 [2024-12-05 13:34:31.694819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.195 [2024-12-05 13:34:31.694993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.195 [2024-12-05 13:34:31.695010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.195 5247.00 IOPS, 655.88 MiB/s [2024-12-05T12:34:31.763Z] [2024-12-05 13:34:31.703802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.195 [2024-12-05 13:34:31.704053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.195 [2024-12-05 13:34:31.704070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.195 [2024-12-05 13:34:31.711919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.195 [2024-12-05 13:34:31.712168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.195 [2024-12-05 13:34:31.712184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.195 [2024-12-05 13:34:31.720792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.195 [2024-12-05 13:34:31.721072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.195 [2024-12-05 13:34:31.721090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.195 [2024-12-05 13:34:31.731826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.195 [2024-12-05 13:34:31.732131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:30:09.195 [2024-12-05 13:34:31.732149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.195 [2024-12-05 13:34:31.743113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.195 [2024-12-05 13:34:31.743371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.196 [2024-12-05 13:34:31.743388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.196 [2024-12-05 13:34:31.751143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.196 [2024-12-05 13:34:31.751311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.196 [2024-12-05 13:34:31.751327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.761173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.761311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.761328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.769381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.769628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.769644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.779752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.779991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.780008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.790475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.790718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.790734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.801390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.801719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.801739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.812439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.812705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.812735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.823622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.823838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.823854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.834094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.834339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.834355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.845004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.845281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.845299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.854238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.854421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.854439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.860123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.860293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.860310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.864705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.864891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.864908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.868932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.869103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.869122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.872894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.873066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.873083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.878831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.879014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.879030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.883027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.883206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.883226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.886823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.887005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.887022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.890966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.891122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.891142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.894741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.894921] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.894941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.900699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.900994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.901010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.908999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.909301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.909319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.915974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.916216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.916233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.922773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.923055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.923073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.928069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.928249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.928265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.932155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.932328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.458 [2024-12-05 13:34:31.932344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.458 [2024-12-05 13:34:31.936185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.458 [2024-12-05 13:34:31.936360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.459 [2024-12-05 13:34:31.936377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.459 [2024-12-05 13:34:31.940317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.459 [2024-12-05 13:34:31.940492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.459 [2024-12-05 13:34:31.940509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.459 [2024-12-05 13:34:31.945249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.459 [2024-12-05 13:34:31.945562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.459 [2024-12-05 13:34:31.945579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.459 [2024-12-05 13:34:31.951253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.459 [2024-12-05 13:34:31.951427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.459 [2024-12-05 13:34:31.951444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.459 [2024-12-05 13:34:31.955046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.459 [2024-12-05 13:34:31.955217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.459 [2024-12-05 13:34:31.955233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.459 [2024-12-05 13:34:31.962616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.459 [2024-12-05 13:34:31.962917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.459 [2024-12-05 13:34:31.962933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.459 [2024-12-05 13:34:31.969894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.459 [2024-12-05 13:34:31.970113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.459 [2024-12-05 13:34:31.970135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.459 [2024-12-05 13:34:31.977127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.459 [2024-12-05 
13:34:31.977276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.459 [2024-12-05 13:34:31.977292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.459 [2024-12-05 13:34:31.982955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.459 [2024-12-05 13:34:31.983125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.459 [2024-12-05 13:34:31.983141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.459 [2024-12-05 13:34:31.989775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.459 [2024-12-05 13:34:31.989961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.459 [2024-12-05 13:34:31.989977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.459 [2024-12-05 13:34:31.995833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.459 [2024-12-05 13:34:31.996081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.459 [2024-12-05 13:34:31.996100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.459 [2024-12-05 13:34:32.003921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.459 [2024-12-05 13:34:32.004184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.459 [2024-12-05 13:34:32.004202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.459 [2024-12-05 13:34:32.011060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.459 [2024-12-05 13:34:32.011379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.459 [2024-12-05 13:34:32.011397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.459 [2024-12-05 13:34:32.018916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.459 [2024-12-05 13:34:32.019090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.459 [2024-12-05 13:34:32.019107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.024693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 
00:30:09.721 [2024-12-05 13:34:32.024874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.024891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.033470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.033762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.033780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.041499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.041630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.041646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.047000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.047174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.047191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.054501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.054690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.054707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.062419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.062682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.062700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.068188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.068337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.068353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.074358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) 
with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.074572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.074592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.081957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.082143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.082160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.090999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.091295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.091313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.099765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.099948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.099965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.105222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.105456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.105472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.113953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.114214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.114231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.120595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.120752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.120768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.129716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.130041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.130058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.138504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.138823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.138840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.146449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.146738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.146757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.153874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.154107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.154123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.160544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.160707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.160726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.721 [2024-12-05 13:34:32.164717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.721 [2024-12-05 13:34:32.164890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.721 [2024-12-05 13:34:32.164908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.168965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.169141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.169157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.173371] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.173609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.173625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.178592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.178840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.178858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.184756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.184988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.185005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.191250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.191421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.191441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.198817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.199008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.199025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.202805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.202990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.203007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.207102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.207274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.207294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.211391] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.211555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.211572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.220807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.221141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.221160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.229857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.230033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.230050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.233819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.234000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.234017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.237754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.237936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.237953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.242000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.242178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.242195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.246119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.246299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.246315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.722 
[2024-12-05 13:34:32.250250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.250420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.250437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.257445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.257757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.257774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.263165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.263363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.263380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.269338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.269513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.269529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.273507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.273685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.273701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.278614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.278792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.278808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:09.722 [2024-12-05 13:34:32.282629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8 00:30:09.722 [2024-12-05 13:34:32.282802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:09.722 [2024-12-05 13:34:32.282823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0
00:30:09.984 [2024-12-05 13:34:32.287262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8
00:30:09.984 [2024-12-05 13:34:32.287436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:09.984 [2024-12-05 13:34:32.287455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... roughly sixty further entries from 13:34:32.291947 through 13:34:32.691030 elided; each repeats the same three-line pattern on tqpair=(0xb88d00): a tcp.c:2241:data_crc32_calc_done *ERROR* data digest error, the WRITE command notice (qid:1 cid:1, varying lba, len:32), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:30:10.249 [2024-12-05 13:34:32.694618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8
00:30:10.249 [2024-12-05 13:34:32.694796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.249 [2024-12-05 13:34:32.694815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:30:10.249 [2024-12-05 13:34:32.698743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xb88d00) with pdu=0x200016eff3c8
00:30:10.249 [2024-12-05 13:34:32.699051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:10.249 [2024-12-05 13:34:32.699074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:30:10.249 4961.00 IOPS, 620.12 MiB/s
00:30:10.249 Latency(us)
00:30:10.249 [2024-12-05T12:34:32.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:10.249 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:30:10.249 nvme0n1 : 2.00 4963.22 620.40 0.00 0.00 3219.71 1638.40 13052.59
00:30:10.249 [2024-12-05T12:34:32.817Z] ===================================================================================================================
00:30:10.249 [2024-12-05T12:34:32.817Z] Total : 4963.22 620.40 0.00 0.00 3219.71 1638.40 13052.59
00:30:10.249 {
00:30:10.249 "results": [
00:30:10.249 {
00:30:10.249 "job": "nvme0n1",
00:30:10.249 "core_mask": "0x2",
00:30:10.249 "workload": "randwrite",
00:30:10.249 "status": "finished",
00:30:10.249 "queue_depth": 16,
00:30:10.249 "io_size": 131072,
00:30:10.249 "runtime": 2.003136,
00:30:10.249 "iops": 4963.217674686092,
00:30:10.249 "mibps": 620.4022093357615,
00:30:10.249 "io_failed": 0,
00:30:10.249 "io_timeout": 0,
00:30:10.249 "avg_latency_us": 3219.714729430698,
00:30:10.249 "min_latency_us": 1638.4,
00:30:10.249 "max_latency_us": 13052.586666666666
00:30:10.249 }
00:30:10.249 ],
00:30:10.249 "core_count": 1
00:30:10.249 }
00:30:10.249 13:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:30:10.249 13:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:30:10.249 13:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:30:10.249 | .driver_specific
00:30:10.249 | .nvme_error
00:30:10.249 | .status_code
00:30:10.249 | .command_transient_transport_error'
00:30:10.249 13:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:30:10.511 13:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 321 > 0 ))
00:30:10.511 13:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1121092
00:30:10.511 13:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1121092 ']'
00:30:10.511 13:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1121092
00:30:10.511 13:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:10.511 13:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
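The xtrace above is the test's actual measurement: digest.sh asks the bperf process for per-bdev I/O statistics over its RPC socket and extracts the NVMe transient-error counter with jq. A minimal standalone sketch of that pipeline, assuming a bdevperf process is already serving RPC on /var/tmp/bperf.sock and exposes a bdev named nvme0n1 (the count variable name is illustrative):

  # Query per-bdev I/O statistics and pull out the number of completions that
  # ended in COMMAND TRANSIENT TRANSPORT ERROR, mirroring the traced helper.
  count=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test only passes if digest-error injection actually produced such completions
  # (here the trace shows 321 of them).
  (( count > 0 ))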
00:30:10.511 13:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1121092
00:30:10.511 13:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:30:10.511 13:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:30:10.511 13:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1121092'
00:30:10.511 killing process with pid 1121092
00:30:10.511 13:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1121092
00:30:10.511 Received shutdown signal, test time was about 2.000000 seconds
00:30:10.511
00:30:10.511 Latency(us)
00:30:10.511 [2024-12-05T12:34:33.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:10.511 [2024-12-05T12:34:33.079Z] ===================================================================================================================
00:30:10.511 [2024-12-05T12:34:33.079Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:10.511 13:34:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1121092
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1118669
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1118669 ']'
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1118669
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1118669
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1118669'
00:30:10.773 killing process with pid 1118669
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1118669
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1118669
00:30:10.773
00:30:10.773 real 0m16.479s
00:30:10.773 user 0m32.588s
00:30:10.773 sys 0m3.592s
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:30:10.773 ************************************
00:30:10.773 END TEST nvmf_digest_error
00:30:10.773 ************************************
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
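The @954-@978 lines above are the xtrace of the killprocess helper from autotest_common.sh. Reassembled from the traced checks (a paraphrase of its shape, not the helper's verbatim body):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                          # @954: a pid argument is required
      if ! kill -0 "$pid" 2>/dev/null; then              # @958: probe whether it is still alive
          echo "Process with pid $pid is not found"      # @981 path, seen later in this log
          return 0
      fi
      if [ "$(uname)" = Linux ]; then                    # @959
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")   # @960: resolve the process name
          [ "$process_name" = sudo ] && return 1         # @964: never kill a bare sudo
      fi
      echo "killing process with pid $pid"               # @972
      kill "$pid"                                        # @973
      wait "$pid"                                        # @978: reap it and propagate the exit status
  }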
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:10.773 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
00:30:11.033 rmmod nvme_fabrics
00:30:11.033 rmmod nvme_keyring
00:30:11.033 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:11.033 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1118669 ']'
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1118669
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1118669 ']'
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1118669
00:30:11.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1118669) - No such process
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1118669 is not found'
00:30:11.034 Process with pid 1118669 is not found
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:11.034 13:34:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:12.946 13:34:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:12.946
00:30:12.946 real 0m44.248s
00:30:12.946 user 1m8.444s
00:30:12.946 sys 0m13.508s
00:30:12.946 13:34:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:12.946 13:34:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:30:12.946 ************************************
00:30:12.946 END TEST nvmf_digest
00:30:12.946 ************************************
00:30:13.206 13:34:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:30:13.206 13:34:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:30:13.206 13:34:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
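The nvmftestfini teardown traced above reduces to a short sequence. A condensed sketch follows; the ip netns delete line is an assumption about what _remove_spdk_ns does, since the trace does not show its body:

  sync
  set +e
  for i in {1..20}; do                      # retry: the module may still be referenced briefly
      modprobe -v -r nvme-tcp && break      # also drops nvme_fabrics / nvme_keyring, as logged
  done
  modprobe -v -r nvme-fabrics
  set -e
  # iptr: strip only the SPDK-tagged firewall rules, keeping everything else intact
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # remove_spdk_ns presumably removes the per-test namespace, e.g.:
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null
  ip -4 addr flush cvl_0_1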
00:30:13.206 13:34:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:30:13.206 13:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:30:13.206 13:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:13.206 13:34:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:13.206 ************************************
00:30:13.206 START TEST nvmf_bdevperf
00:30:13.206 ************************************
00:30:13.206 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:30:13.207 * Looking for test storage...
00:30:13.207 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:30:13.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:13.207 --rc genhtml_branch_coverage=1
00:30:13.207 --rc genhtml_function_coverage=1
00:30:13.207 --rc genhtml_legend=1
00:30:13.207 --rc geninfo_all_blocks=1
00:30:13.207 --rc geninfo_unexecuted_blocks=1
00:30:13.207
00:30:13.207 '
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:30:13.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:13.207 --rc genhtml_branch_coverage=1
00:30:13.207 --rc genhtml_function_coverage=1
00:30:13.207 --rc genhtml_legend=1
00:30:13.207 --rc geninfo_all_blocks=1
00:30:13.207 --rc geninfo_unexecuted_blocks=1
00:30:13.207
00:30:13.207 '
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:30:13.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:13.207 --rc genhtml_branch_coverage=1
00:30:13.207 --rc genhtml_function_coverage=1
00:30:13.207 --rc genhtml_legend=1
00:30:13.207 --rc geninfo_all_blocks=1
00:30:13.207 --rc geninfo_unexecuted_blocks=1
00:30:13.207
00:30:13.207 '
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:30:13.207 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:13.207 --rc genhtml_branch_coverage=1
00:30:13.207 --rc genhtml_function_coverage=1
00:30:13.207 --rc genhtml_legend=1
00:30:13.207 --rc geninfo_all_blocks=1
00:30:13.207 --rc geninfo_unexecuted_blocks=1
00:30:13.207
00:30:13.207 '
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:13.207 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:30:13.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable
00:30:13.468 13:34:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:21.610 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:21.610 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=()
00:30:21.610 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs
00:30:21.610 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=()
00:30:21.610 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:30:21.610 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=()
00:30:21.610 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers
00:30:21.610 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=()
00:30:21.610 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs
00:30:21.610 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=()
00:30:21.610 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=()
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=()
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:30:21.611 Found 0000:31:00.0 (0x8086 - 0x159b)
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:30:21.611 Found 0000:31:00.1 (0x8086 - 0x159b)
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
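gather_supported_nvmf_pci_devs, traced above, maps supported PCI device IDs (here Intel E810, 0x8086:0x159b) to kernel network interfaces purely through sysfs. A reduced sketch of that idiom; the two bus addresses are the ones this machine reported, and on another host they would come from a PCI scan:

  net_devs=()
  for pci in 0000:31:00.0 0000:31:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done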
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:30:21.611 Found net devices under 0000:31:00.0: cvl_0_0
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:30:21.611 Found net devices under 0000:31:00.1: cvl_0_1
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:21.611 13:34:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:21.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:21.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms
00:30:21.611
00:30:21.611 --- 10.0.0.2 ping statistics ---
00:30:21.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:21.611 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms
00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:21.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:21.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:30:21.611 00:30:21.611 --- 10.0.0.1 ping statistics --- 00:30:21.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:21.611 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:21.611 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:21.872 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:30:21.872 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:30:21.872 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:21.872 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:21.872 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:21.872 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1126726 00:30:21.872 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1126726 00:30:21.872 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1126726 ']' 00:30:21.872 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:21.872 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.872 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:21.872 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.873 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:21.873 13:34:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:21.873 [2024-12-05 13:34:44.264782] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
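The network plumbing traced above (nvmf_tcp_init) moves one E810 port, cvl_0_0, into a private namespace as the target side at 10.0.0.2, while its peer port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; on this phy rig the two ports are presumably cabled back-to-back, which is why both ping directions succeed. Collected from the trace (all commands appear there verbatim; run as root):

# Condensed from the nvmf_tcp_init trace above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port, as traced
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns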
00:30:21.873 [2024-12-05 13:34:44.264850] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.873 [2024-12-05 13:34:44.376014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:21.873 [2024-12-05 13:34:44.427402] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.873 [2024-12-05 13:34:44.427456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.873 [2024-12-05 13:34:44.427465] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.873 [2024-12-05 13:34:44.427473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.873 [2024-12-05 13:34:44.427480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:21.873 [2024-12-05 13:34:44.429384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:21.873 [2024-12-05 13:34:44.429546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.873 [2024-12-05 13:34:44.429545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:22.815 [2024-12-05 13:34:45.126093] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:22.815 Malloc0 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
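nvmfappstart launches nvmf_tgt inside the namespace, waits for its RPC socket, and the rpc_cmd calls then configure it over /var/tmp/spdk.sock. The same sequence with scripts/rpc.py invoked directly is sketched below (rpc_cmd in the harness is a wrapper around rpc.py; the checkout-relative paths and the socket-polling loop are assumptions, and the namespace and listener RPCs that complete the target follow in the trace just after this):

# Sketch of the traced bring-up using scripts/rpc.py directly, run from
# an SPDK checkout.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
# stand-in for the harness's waitforlisten: poll the RPC socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # flags exactly as traced
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001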
00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:22.815 [2024-12-05 13:34:45.194032] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:22.815 { 00:30:22.815 "params": { 00:30:22.815 "name": "Nvme$subsystem", 00:30:22.815 "trtype": "$TEST_TRANSPORT", 00:30:22.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:22.815 "adrfam": "ipv4", 00:30:22.815 "trsvcid": "$NVMF_PORT", 00:30:22.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:22.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:22.815 "hdgst": ${hdgst:-false}, 00:30:22.815 "ddgst": ${ddgst:-false} 00:30:22.815 }, 00:30:22.815 "method": "bdev_nvme_attach_controller" 00:30:22.815 } 00:30:22.815 EOF 00:30:22.815 )") 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:22.815 13:34:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:22.815 "params": { 00:30:22.815 "name": "Nvme1", 00:30:22.815 "trtype": "tcp", 00:30:22.815 "traddr": "10.0.0.2", 00:30:22.815 "adrfam": "ipv4", 00:30:22.815 "trsvcid": "4420", 00:30:22.815 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:22.815 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:22.815 "hdgst": false, 00:30:22.815 "ddgst": false 00:30:22.815 }, 00:30:22.815 "method": "bdev_nvme_attach_controller" 00:30:22.815 }' 00:30:22.815 [2024-12-05 13:34:45.250200] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:30:22.815 [2024-12-05 13:34:45.250249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126798 ] 00:30:22.815 [2024-12-05 13:34:45.328561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.815 [2024-12-05 13:34:45.364881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.075 Running I/O for 1 seconds... 00:30:24.458 9042.00 IOPS, 35.32 MiB/s 00:30:24.458 Latency(us) 00:30:24.458 [2024-12-05T12:34:47.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:24.458 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:24.458 Verification LBA range: start 0x0 length 0x4000 00:30:24.458 Nvme1n1 : 1.01 9121.36 35.63 0.00 0.00 13973.03 1481.39 15073.28 00:30:24.458 [2024-12-05T12:34:47.026Z] =================================================================================================================== 00:30:24.458 [2024-12-05T12:34:47.026Z] Total : 9121.36 35.63 0.00 0.00 13973.03 1481.39 15073.28 00:30:24.458 13:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1127136 00:30:24.458 13:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:30:24.458 13:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:30:24.458 13:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:30:24.458 13:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:30:24.458 13:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:30:24.458 13:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:24.458 13:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:24.458 { 00:30:24.458 "params": { 00:30:24.458 "name": "Nvme$subsystem", 00:30:24.458 "trtype": "$TEST_TRANSPORT", 00:30:24.459 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:24.459 "adrfam": "ipv4", 00:30:24.459 "trsvcid": "$NVMF_PORT", 00:30:24.459 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:24.459 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:24.459 "hdgst": ${hdgst:-false}, 00:30:24.459 "ddgst": ${ddgst:-false} 00:30:24.459 }, 00:30:24.459 "method": "bdev_nvme_attach_controller" 00:30:24.459 } 00:30:24.459 EOF 00:30:24.459 )") 00:30:24.459 13:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:30:24.459 13:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
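gen_nvmf_target_json, traced here for the second bdevperf run (its printf output continues just below), expands one bdev_nvme_attach_controller stanza per subsystem and pipes the result through jq into the JSON config that bdevperf reads from /dev/fd. A standalone equivalent using a temporary file is sketched below; only the inner method/params object appears verbatim in the trace, and the outer "subsystems" wrapper is an assumption based on SPDK's JSON config format:

# Sketch: drive bdevperf against the target by hand ("subsystems" wrapper
# assumed; inner object copied from the printf output in the trace).
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 1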
00:30:24.459 13:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:30:24.459 13:34:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:24.459 "params": { 00:30:24.459 "name": "Nvme1", 00:30:24.459 "trtype": "tcp", 00:30:24.459 "traddr": "10.0.0.2", 00:30:24.459 "adrfam": "ipv4", 00:30:24.459 "trsvcid": "4420", 00:30:24.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:24.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:24.459 "hdgst": false, 00:30:24.459 "ddgst": false 00:30:24.459 }, 00:30:24.459 "method": "bdev_nvme_attach_controller" 00:30:24.459 }' 00:30:24.459 [2024-12-05 13:34:46.771247] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:30:24.459 [2024-12-05 13:34:46.771303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127136 ] 00:30:24.459 [2024-12-05 13:34:46.849170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.459 [2024-12-05 13:34:46.884138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.719 Running I/O for 15 seconds... 00:30:26.600 10973.00 IOPS, 42.86 MiB/s [2024-12-05T12:34:49.742Z] 11173.00 IOPS, 43.64 MiB/s [2024-12-05T12:34:49.742Z] 13:34:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1126726 00:30:27.174 13:34:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:30:27.174 [2024-12-05 13:34:49.736776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:107984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.174 [2024-12-05 13:34:49.736820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.174 [2024-12-05 13:34:49.736840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.174 [2024-12-05 13:34:49.736852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.174 [2024-12-05 13:34:49.736874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.174 [2024-12-05 13:34:49.736885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.174 [2024-12-05 13:34:49.736895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.174 [2024-12-05 13:34:49.736905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.174 [2024-12-05 13:34:49.736916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.174 [2024-12-05 13:34:49.736924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.174 [2024-12-05 13:34:49.736936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.174 [2024-12-05 
13:34:49.736943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.174 [2024-12-05 13:34:49.736954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:27.174 [2024-12-05 13:34:49.736961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same two-line NOTICE pair repeats for every remaining request outstanding on the deleted submission queue: READs lba 108040-108928 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITEs lba 108952-109000 (SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08) ...]
00:30:27.441 [2024-12-05 13:34:49.739123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [2024-12-05
13:34:49.739130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.441 [2024-12-05 13:34:49.739140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119f970 is same with the state(6) to be set 00:30:27.441 [2024-12-05 13:34:49.739151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:27.441 [2024-12-05 13:34:49.739157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:27.441 [2024-12-05 13:34:49.739164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108944 len:8 PRP1 0x0 PRP2 0x0 00:30:27.441 [2024-12-05 13:34:49.739172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:27.441 [2024-12-05 13:34:49.742785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.441 [2024-12-05 13:34:49.742840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.441 [2024-12-05 13:34:49.743652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.441 [2024-12-05 13:34:49.743670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.441 [2024-12-05 13:34:49.743680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.441 [2024-12-05 13:34:49.743908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.441 [2024-12-05 13:34:49.744133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.441 [2024-12-05 13:34:49.744143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.441 [2024-12-05 13:34:49.744152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.441 [2024-12-05 13:34:49.744160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
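
The burst of paired command/completion prints above is qpair 1 being drained after the target connection dropped: every outstanding READ/WRITE is completed in error with status (00/08), i.e. status code type 0x0 (generic command status) and status code 0x08, "Command Aborted due to SQ Deletion" in the NVMe base specification. A minimal sketch of unpacking that (sct/sc) pair from the 16-bit status halfword of completion dword 3 (plain C written for illustration here, not SPDK's own code):

    #include <stdint.h>
    #include <stdio.h>

    /* Status halfword layout per the NVMe base spec (CQE DW3 bits 31:16):
     * bit 0 = phase tag (P), bits 8:1 = status code (SC),
     * bits 11:9 = status code type (SCT), bit 14 = more (M),
     * bit 15 = do not retry (DNR). */
    static void decode_status(uint16_t status)
    {
        unsigned p   = status & 0x1;
        unsigned sc  = (status >> 1) & 0xff;
        unsigned sct = (status >> 9) & 0x7;
        unsigned m   = (status >> 14) & 0x1;
        unsigned dnr = (status >> 15) & 0x1;
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    }

    int main(void)
    {
        /* SCT 0, SC 0x08 -> prints "(00/08) p:0 m:0 dnr:0", as in the log */
        decode_status(0x08 << 1);
        return 0;
    }
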
00:30:27.441 [2024-12-05 13:34:49.757064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.441 [2024-12-05 13:34:49.757634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.441 [2024-12-05 13:34:49.757653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.441 [2024-12-05 13:34:49.757661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.441 [2024-12-05 13:34:49.757892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.441 [2024-12-05 13:34:49.758115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.441 [2024-12-05 13:34:49.758124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.441 [2024-12-05 13:34:49.758132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.441 [2024-12-05 13:34:49.758139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.441 [2024-12-05 13:34:49.771005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.441 [2024-12-05 13:34:49.771609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.441 [2024-12-05 13:34:49.771650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.441 [2024-12-05 13:34:49.771661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.441 [2024-12-05 13:34:49.771913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.441 [2024-12-05 13:34:49.772139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.441 [2024-12-05 13:34:49.772149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.441 [2024-12-05 13:34:49.772161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.441 [2024-12-05 13:34:49.772169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
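
Every failed cycle in this stretch begins the same way: posix_sock_create() reports connect() failing with errno = 111, which on Linux is ECONNREFUSED. The address 10.0.0.2 is reachable, but nothing is accepting TCP connections on port 4420 (the NVMe/TCP well-known port) while the target side of the test is down. A standalone repro of just that socket step, assuming the same refused listener (against an unroutable address you would see a timeout instead):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(4420),   /* NVMe/TCP well-known port */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With a live host but no listener, this prints errno 111 */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }
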
00:30:27.441 [2024-12-05 13:34:49.784830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.441 [2024-12-05 13:34:49.785471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.441 [2024-12-05 13:34:49.785511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.441 [2024-12-05 13:34:49.785522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.441 [2024-12-05 13:34:49.785762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.441 [2024-12-05 13:34:49.785997] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.441 [2024-12-05 13:34:49.786007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.441 [2024-12-05 13:34:49.786015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.441 [2024-12-05 13:34:49.786023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.441 [2024-12-05 13:34:49.798662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.441 [2024-12-05 13:34:49.799196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.441 [2024-12-05 13:34:49.799235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.441 [2024-12-05 13:34:49.799246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.441 [2024-12-05 13:34:49.799487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.441 [2024-12-05 13:34:49.799712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.441 [2024-12-05 13:34:49.799722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.441 [2024-12-05 13:34:49.799730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.441 [2024-12-05 13:34:49.799738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:27.441 [2024-12-05 13:34:49.812594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.441 [2024-12-05 13:34:49.813246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.441 [2024-12-05 13:34:49.813285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.442 [2024-12-05 13:34:49.813296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.442 [2024-12-05 13:34:49.813536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.442 [2024-12-05 13:34:49.813762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.442 [2024-12-05 13:34:49.813771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.442 [2024-12-05 13:34:49.813780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.442 [2024-12-05 13:34:49.813788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.442 [2024-12-05 13:34:49.826457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.442 [2024-12-05 13:34:49.827171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.442 [2024-12-05 13:34:49.827210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.442 [2024-12-05 13:34:49.827221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.442 [2024-12-05 13:34:49.827461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.442 [2024-12-05 13:34:49.827686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.442 [2024-12-05 13:34:49.827697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.442 [2024-12-05 13:34:49.827705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.442 [2024-12-05 13:34:49.827713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:27.442 [2024-12-05 13:34:49.840362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.442 [2024-12-05 13:34:49.841051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.442 [2024-12-05 13:34:49.841090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.442 [2024-12-05 13:34:49.841101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.442 [2024-12-05 13:34:49.841342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.442 [2024-12-05 13:34:49.841567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.442 [2024-12-05 13:34:49.841577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.442 [2024-12-05 13:34:49.841584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.442 [2024-12-05 13:34:49.841592] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.442 [2024-12-05 13:34:49.854252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.442 [2024-12-05 13:34:49.854941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.442 [2024-12-05 13:34:49.854980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.442 [2024-12-05 13:34:49.854993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.442 [2024-12-05 13:34:49.855237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.442 [2024-12-05 13:34:49.855462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.442 [2024-12-05 13:34:49.855472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.442 [2024-12-05 13:34:49.855480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.442 [2024-12-05 13:34:49.855488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:27.442 [2024-12-05 13:34:49.868154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.442 [2024-12-05 13:34:49.868832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.442 [2024-12-05 13:34:49.868883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.442 [2024-12-05 13:34:49.868895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.442 [2024-12-05 13:34:49.869135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.442 [2024-12-05 13:34:49.869361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.442 [2024-12-05 13:34:49.869370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.442 [2024-12-05 13:34:49.869378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.442 [2024-12-05 13:34:49.869386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.442 [2024-12-05 13:34:49.882047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.442 [2024-12-05 13:34:49.882718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.442 [2024-12-05 13:34:49.882757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.442 [2024-12-05 13:34:49.882768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.442 [2024-12-05 13:34:49.883017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.442 [2024-12-05 13:34:49.883244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.442 [2024-12-05 13:34:49.883253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.442 [2024-12-05 13:34:49.883261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.442 [2024-12-05 13:34:49.883269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:27.442 [2024-12-05 13:34:49.895915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.442 [2024-12-05 13:34:49.896571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.442 [2024-12-05 13:34:49.896611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.442 [2024-12-05 13:34:49.896622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.442 [2024-12-05 13:34:49.896871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.442 [2024-12-05 13:34:49.897098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.442 [2024-12-05 13:34:49.897108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.442 [2024-12-05 13:34:49.897116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.442 [2024-12-05 13:34:49.897125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.442 [2024-12-05 13:34:49.909763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.442 [2024-12-05 13:34:49.910379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.442 [2024-12-05 13:34:49.910419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.442 [2024-12-05 13:34:49.910430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.442 [2024-12-05 13:34:49.910675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.442 [2024-12-05 13:34:49.910910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.442 [2024-12-05 13:34:49.910921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.442 [2024-12-05 13:34:49.910929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.442 [2024-12-05 13:34:49.910937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:27.442 [2024-12-05 13:34:49.923799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.442 [2024-12-05 13:34:49.924463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.442 [2024-12-05 13:34:49.924503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.442 [2024-12-05 13:34:49.924514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.442 [2024-12-05 13:34:49.924755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.443 [2024-12-05 13:34:49.924989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.443 [2024-12-05 13:34:49.925000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.443 [2024-12-05 13:34:49.925008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.443 [2024-12-05 13:34:49.925016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.443 [2024-12-05 13:34:49.937668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.443 [2024-12-05 13:34:49.938307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.443 [2024-12-05 13:34:49.938346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.443 [2024-12-05 13:34:49.938357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.443 [2024-12-05 13:34:49.938597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.443 [2024-12-05 13:34:49.938823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.443 [2024-12-05 13:34:49.938832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.443 [2024-12-05 13:34:49.938841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.443 [2024-12-05 13:34:49.938849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:27.443 [2024-12-05 13:34:49.951503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.443 [2024-12-05 13:34:49.951983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.443 [2024-12-05 13:34:49.952023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.443 [2024-12-05 13:34:49.952036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.443 [2024-12-05 13:34:49.952278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.443 [2024-12-05 13:34:49.952503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.443 [2024-12-05 13:34:49.952514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.443 [2024-12-05 13:34:49.952526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.443 [2024-12-05 13:34:49.952535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.443 [2024-12-05 13:34:49.965411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.443 [2024-12-05 13:34:49.965975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.443 [2024-12-05 13:34:49.966014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.443 [2024-12-05 13:34:49.966027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.443 [2024-12-05 13:34:49.966271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.443 [2024-12-05 13:34:49.966497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.443 [2024-12-05 13:34:49.966506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.443 [2024-12-05 13:34:49.966514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.443 [2024-12-05 13:34:49.966522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:27.443 [2024-12-05 13:34:49.979397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.443 [2024-12-05 13:34:49.979948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.443 [2024-12-05 13:34:49.979988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.443 [2024-12-05 13:34:49.980000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.443 [2024-12-05 13:34:49.980244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.443 [2024-12-05 13:34:49.980470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.443 [2024-12-05 13:34:49.980480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.443 [2024-12-05 13:34:49.980488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.443 [2024-12-05 13:34:49.980496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.443 [2024-12-05 13:34:49.993360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.443 [2024-12-05 13:34:49.993963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.443 [2024-12-05 13:34:49.994002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.443 [2024-12-05 13:34:49.994016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.443 [2024-12-05 13:34:49.994260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.443 [2024-12-05 13:34:49.994486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.443 [2024-12-05 13:34:49.994496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.443 [2024-12-05 13:34:49.994504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.443 [2024-12-05 13:34:49.994512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:27.704 [2024-12-05 13:34:50.007273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.704 [2024-12-05 13:34:50.007990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.704 [2024-12-05 13:34:50.008030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.704 [2024-12-05 13:34:50.008043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.704 [2024-12-05 13:34:50.008289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.704 [2024-12-05 13:34:50.008514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.704 [2024-12-05 13:34:50.008524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.704 [2024-12-05 13:34:50.008534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.704 [2024-12-05 13:34:50.008543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.704 [2024-12-05 13:34:50.021227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.704 [2024-12-05 13:34:50.021647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.704 [2024-12-05 13:34:50.021667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.704 [2024-12-05 13:34:50.021677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.704 [2024-12-05 13:34:50.021906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.704 [2024-12-05 13:34:50.022130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.704 [2024-12-05 13:34:50.022141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.704 [2024-12-05 13:34:50.022150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.704 [2024-12-05 13:34:50.022157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:27.704 [2024-12-05 13:34:50.035233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.704 [2024-12-05 13:34:50.035945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.704 [2024-12-05 13:34:50.035984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.704 [2024-12-05 13:34:50.035996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.704 [2024-12-05 13:34:50.036237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.704 [2024-12-05 13:34:50.036462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.704 [2024-12-05 13:34:50.036472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.704 [2024-12-05 13:34:50.036480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.704 [2024-12-05 13:34:50.036488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.704 9960.67 IOPS, 38.91 MiB/s [2024-12-05T12:34:50.272Z] [2024-12-05 13:34:50.050836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.704 [2024-12-05 13:34:50.051498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.704 [2024-12-05 13:34:50.051543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.704 [2024-12-05 13:34:50.051554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.704 [2024-12-05 13:34:50.051795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.704 [2024-12-05 13:34:50.052030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.704 [2024-12-05 13:34:50.052041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.704 [2024-12-05 13:34:50.052050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.704 [2024-12-05 13:34:50.052058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:27.704 [2024-12-05 13:34:50.064743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.704 [2024-12-05 13:34:50.065435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.704 [2024-12-05 13:34:50.065474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.704 [2024-12-05 13:34:50.065486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.704 [2024-12-05 13:34:50.065726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.704 [2024-12-05 13:34:50.065958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.704 [2024-12-05 13:34:50.065970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.704 [2024-12-05 13:34:50.065978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.704 [2024-12-05 13:34:50.065986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.704 [2024-12-05 13:34:50.078636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.704 [2024-12-05 13:34:50.079219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.704 [2024-12-05 13:34:50.079239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.704 [2024-12-05 13:34:50.079248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.704 [2024-12-05 13:34:50.079469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.704 [2024-12-05 13:34:50.079691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.704 [2024-12-05 13:34:50.079701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.705 [2024-12-05 13:34:50.079708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.705 [2024-12-05 13:34:50.079715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:27.705 [2024-12-05 13:34:50.092581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.705 [2024-12-05 13:34:50.093130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.705 [2024-12-05 13:34:50.093148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.705 [2024-12-05 13:34:50.093157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.705 [2024-12-05 13:34:50.093383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.705 [2024-12-05 13:34:50.093605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.705 [2024-12-05 13:34:50.093615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.705 [2024-12-05 13:34:50.093622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.705 [2024-12-05 13:34:50.093629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.705 [2024-12-05 13:34:50.106490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.705 [2024-12-05 13:34:50.107194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.705 [2024-12-05 13:34:50.107233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.705 [2024-12-05 13:34:50.107244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.705 [2024-12-05 13:34:50.107485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.705 [2024-12-05 13:34:50.107710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.705 [2024-12-05 13:34:50.107720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.705 [2024-12-05 13:34:50.107728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.705 [2024-12-05 13:34:50.107736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:27.705 [2024-12-05 13:34:50.120389] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.705 [2024-12-05 13:34:50.121107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.705 [2024-12-05 13:34:50.121146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.705 [2024-12-05 13:34:50.121158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.705 [2024-12-05 13:34:50.121398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.705 [2024-12-05 13:34:50.121624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.705 [2024-12-05 13:34:50.121634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.705 [2024-12-05 13:34:50.121641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.705 [2024-12-05 13:34:50.121650] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.705 [2024-12-05 13:34:50.134311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.705 [2024-12-05 13:34:50.134847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.705 [2024-12-05 13:34:50.134893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.705 [2024-12-05 13:34:50.134906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.705 [2024-12-05 13:34:50.135148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.705 [2024-12-05 13:34:50.135374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.705 [2024-12-05 13:34:50.135388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.705 [2024-12-05 13:34:50.135396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.705 [2024-12-05 13:34:50.135404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:27.705 [2024-12-05 13:34:50.148264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.705 [2024-12-05 13:34:50.148946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.705 [2024-12-05 13:34:50.148985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.705 [2024-12-05 13:34:50.148996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.705 [2024-12-05 13:34:50.149237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.705 [2024-12-05 13:34:50.149462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.705 [2024-12-05 13:34:50.149472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.705 [2024-12-05 13:34:50.149480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.705 [2024-12-05 13:34:50.149488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.705 [2024-12-05 13:34:50.162165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.705 [2024-12-05 13:34:50.162837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.705 [2024-12-05 13:34:50.162884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.705 [2024-12-05 13:34:50.162897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.705 [2024-12-05 13:34:50.163139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.705 [2024-12-05 13:34:50.163364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.705 [2024-12-05 13:34:50.163374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.705 [2024-12-05 13:34:50.163382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.705 [2024-12-05 13:34:50.163390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:27.705 [2024-12-05 13:34:50.176042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.705 [2024-12-05 13:34:50.176669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.705 [2024-12-05 13:34:50.176708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.705 [2024-12-05 13:34:50.176719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.705 [2024-12-05 13:34:50.176979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.705 [2024-12-05 13:34:50.177205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.705 [2024-12-05 13:34:50.177215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.705 [2024-12-05 13:34:50.177223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.705 [2024-12-05 13:34:50.177231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.705 [2024-12-05 13:34:50.189888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.705 [2024-12-05 13:34:50.190566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.705 [2024-12-05 13:34:50.190604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.705 [2024-12-05 13:34:50.190616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.705 [2024-12-05 13:34:50.190857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.705 [2024-12-05 13:34:50.191093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.705 [2024-12-05 13:34:50.191104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.705 [2024-12-05 13:34:50.191112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.705 [2024-12-05 13:34:50.191120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:27.705 [2024-12-05 13:34:50.203773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.705 [2024-12-05 13:34:50.204457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.705 [2024-12-05 13:34:50.204496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.705 [2024-12-05 13:34:50.204508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.705 [2024-12-05 13:34:50.204750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.705 [2024-12-05 13:34:50.204986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.705 [2024-12-05 13:34:50.204998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.705 [2024-12-05 13:34:50.205006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.705 [2024-12-05 13:34:50.205014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.705 [2024-12-05 13:34:50.217658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.705 [2024-12-05 13:34:50.218316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.705 [2024-12-05 13:34:50.218355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.705 [2024-12-05 13:34:50.218368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.705 [2024-12-05 13:34:50.218610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.705 [2024-12-05 13:34:50.218836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.706 [2024-12-05 13:34:50.218846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.706 [2024-12-05 13:34:50.218854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.706 [2024-12-05 13:34:50.218873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:27.706 [2024-12-05 13:34:50.231525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.706 [2024-12-05 13:34:50.232116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.706 [2024-12-05 13:34:50.232160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.706 [2024-12-05 13:34:50.232173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.706 [2024-12-05 13:34:50.232415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.706 [2024-12-05 13:34:50.232640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.706 [2024-12-05 13:34:50.232650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.706 [2024-12-05 13:34:50.232658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.706 [2024-12-05 13:34:50.232667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:27.706 [2024-12-05 13:34:50.245533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:27.706 [2024-12-05 13:34:50.246180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.706 [2024-12-05 13:34:50.246220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:27.706 [2024-12-05 13:34:50.246232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:27.706 [2024-12-05 13:34:50.246473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:27.706 [2024-12-05 13:34:50.246699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:27.706 [2024-12-05 13:34:50.246710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:27.706 [2024-12-05 13:34:50.246719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:27.706 [2024-12-05 13:34:50.246728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
[The nine-entry reset cycle above -- "resetting controller", "connect() failed, errno = 111", "sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420", "The recv state of tqpair=0x11a16a0 is same with the state(6) to be set", "Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor", "Ctrlr is in error state", "controller reinitialization failed", "in failed state.", "Resetting controller failed." -- repeats verbatim 49 more times at roughly 14 ms intervals, from [2024-12-05 13:34:50.245533] through [2024-12-05 13:34:50.914354] (console offsets 00:30:27.706 to 00:30:28.497); only the timestamps differ between iterations.] 
00:30:28.497 [2024-12-05 13:34:50.927225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.497 [2024-12-05 13:34:50.927860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-05 13:34:50.927907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.497 [2024-12-05 13:34:50.927920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.497 [2024-12-05 13:34:50.928161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.497 [2024-12-05 13:34:50.928386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.497 [2024-12-05 13:34:50.928396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.497 [2024-12-05 13:34:50.928404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.497 [2024-12-05 13:34:50.928412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:28.497 [2024-12-05 13:34:50.941085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.497 [2024-12-05 13:34:50.941666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-05 13:34:50.941686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.497 [2024-12-05 13:34:50.941694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.497 [2024-12-05 13:34:50.941922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.497 [2024-12-05 13:34:50.942145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.497 [2024-12-05 13:34:50.942155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.497 [2024-12-05 13:34:50.942162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.497 [2024-12-05 13:34:50.942170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:28.497 [2024-12-05 13:34:50.955044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.497 [2024-12-05 13:34:50.955587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-05 13:34:50.955609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.497 [2024-12-05 13:34:50.955617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.497 [2024-12-05 13:34:50.955838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.497 [2024-12-05 13:34:50.956066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.497 [2024-12-05 13:34:50.956076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.497 [2024-12-05 13:34:50.956084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.497 [2024-12-05 13:34:50.956091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:28.497 [2024-12-05 13:34:50.968983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.497 [2024-12-05 13:34:50.969415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-05 13:34:50.969434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.497 [2024-12-05 13:34:50.969443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.497 [2024-12-05 13:34:50.969664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.497 [2024-12-05 13:34:50.969891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.497 [2024-12-05 13:34:50.969901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.497 [2024-12-05 13:34:50.969909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.497 [2024-12-05 13:34:50.969916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:28.497 [2024-12-05 13:34:50.983016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.497 [2024-12-05 13:34:50.983576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-05 13:34:50.983593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.497 [2024-12-05 13:34:50.983601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.497 [2024-12-05 13:34:50.983822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.497 [2024-12-05 13:34:50.984051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.497 [2024-12-05 13:34:50.984061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.497 [2024-12-05 13:34:50.984069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.497 [2024-12-05 13:34:50.984076] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:28.497 [2024-12-05 13:34:50.996950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.497 [2024-12-05 13:34:50.997511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-05 13:34:50.997529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.497 [2024-12-05 13:34:50.997538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.497 [2024-12-05 13:34:50.997762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.497 [2024-12-05 13:34:50.997990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.497 [2024-12-05 13:34:50.998000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.497 [2024-12-05 13:34:50.998008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.497 [2024-12-05 13:34:50.998015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:28.497 [2024-12-05 13:34:51.010887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.497 [2024-12-05 13:34:51.011473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-05 13:34:51.011512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.497 [2024-12-05 13:34:51.011524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.497 [2024-12-05 13:34:51.011765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.497 [2024-12-05 13:34:51.012000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.497 [2024-12-05 13:34:51.012012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.497 [2024-12-05 13:34:51.012021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.497 [2024-12-05 13:34:51.012030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:28.497 [2024-12-05 13:34:51.024918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.497 [2024-12-05 13:34:51.025501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-05 13:34:51.025520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.497 [2024-12-05 13:34:51.025529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.497 [2024-12-05 13:34:51.025750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.497 [2024-12-05 13:34:51.025981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.497 [2024-12-05 13:34:51.025992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.497 [2024-12-05 13:34:51.025999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.497 [2024-12-05 13:34:51.026006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:28.497 [2024-12-05 13:34:51.038883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.497 [2024-12-05 13:34:51.039509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.497 [2024-12-05 13:34:51.039549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.497 [2024-12-05 13:34:51.039560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.497 [2024-12-05 13:34:51.039801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.497 [2024-12-05 13:34:51.040035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.497 [2024-12-05 13:34:51.040046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.497 [2024-12-05 13:34:51.040058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.497 [2024-12-05 13:34:51.040066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:28.497 7470.50 IOPS, 29.18 MiB/s [2024-12-05T12:34:51.065Z] [2024-12-05 13:34:51.054381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.497 [2024-12-05 13:34:51.054929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.498 [2024-12-05 13:34:51.054950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.498 [2024-12-05 13:34:51.054959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.498 [2024-12-05 13:34:51.055180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.498 [2024-12-05 13:34:51.055402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.498 [2024-12-05 13:34:51.055413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.498 [2024-12-05 13:34:51.055420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.498 [2024-12-05 13:34:51.055427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
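(Editor's note: on Linux, errno = 111 is ECONNREFUSED — the target at 10.0.0.2:4420, the NVMe/TCP well-known port, is sending TCP RSTs because nothing is listening there while the test holds the subsystem down, so every reconnect attempt fails immediately. The following is a minimal standalone sketch, not SPDK code, that reproduces the same errno; the address and port simply mirror the log, and the exact errno assumes the target host actively refuses rather than timing out:)

/* econnrefused_demo.c — minimal sketch (not SPDK code): reproduce the
 * "connect() failed, errno = 111" seen above by connecting to a TCP
 * port with no listener.  Build: cc -o econnrefused_demo econnrefused_demo.c
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),      /* NVMe/TCP well-known port, as in the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With a reachable host but no listener, this prints:
         *   connect() failed, errno = 111 (Connection refused)      */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}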
00:30:28.760 [2024-12-05 13:34:51.068294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.760 [2024-12-05 13:34:51.068875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.760 [2024-12-05 13:34:51.068893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.760 [2024-12-05 13:34:51.068901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.760 [2024-12-05 13:34:51.069123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.760 [2024-12-05 13:34:51.069344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.760 [2024-12-05 13:34:51.069353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.760 [2024-12-05 13:34:51.069360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.760 [2024-12-05 13:34:51.069367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:28.760 [2024-12-05 13:34:51.082236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.760 [2024-12-05 13:34:51.082925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.760 [2024-12-05 13:34:51.082964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.760 [2024-12-05 13:34:51.082976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.760 [2024-12-05 13:34:51.083219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.760 [2024-12-05 13:34:51.083445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.760 [2024-12-05 13:34:51.083454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.760 [2024-12-05 13:34:51.083462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.760 [2024-12-05 13:34:51.083471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:28.760 [2024-12-05 13:34:51.096129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.760 [2024-12-05 13:34:51.096680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.760 [2024-12-05 13:34:51.096700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.760 [2024-12-05 13:34:51.096709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.760 [2024-12-05 13:34:51.096936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.760 [2024-12-05 13:34:51.097158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.760 [2024-12-05 13:34:51.097168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.760 [2024-12-05 13:34:51.097175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.760 [2024-12-05 13:34:51.097183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:28.760 [2024-12-05 13:34:51.110041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.760 [2024-12-05 13:34:51.110645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.760 [2024-12-05 13:34:51.110684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.760 [2024-12-05 13:34:51.110696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.760 [2024-12-05 13:34:51.110947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.760 [2024-12-05 13:34:51.111174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.760 [2024-12-05 13:34:51.111183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.760 [2024-12-05 13:34:51.111191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.760 [2024-12-05 13:34:51.111199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:28.760 [2024-12-05 13:34:51.124062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.760 [2024-12-05 13:34:51.124646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.760 [2024-12-05 13:34:51.124667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.760 [2024-12-05 13:34:51.124676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.760 [2024-12-05 13:34:51.124904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.760 [2024-12-05 13:34:51.125128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.760 [2024-12-05 13:34:51.125138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.760 [2024-12-05 13:34:51.125145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.760 [2024-12-05 13:34:51.125152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:28.760 [2024-12-05 13:34:51.138001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.760 [2024-12-05 13:34:51.138651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.760 [2024-12-05 13:34:51.138695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.760 [2024-12-05 13:34:51.138706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.760 [2024-12-05 13:34:51.138957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.760 [2024-12-05 13:34:51.139183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.760 [2024-12-05 13:34:51.139192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.760 [2024-12-05 13:34:51.139201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.760 [2024-12-05 13:34:51.139209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:28.760 [2024-12-05 13:34:51.151859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.760 [2024-12-05 13:34:51.152404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.760 [2024-12-05 13:34:51.152424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.760 [2024-12-05 13:34:51.152432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.760 [2024-12-05 13:34:51.152654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.760 [2024-12-05 13:34:51.152880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.760 [2024-12-05 13:34:51.152891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.760 [2024-12-05 13:34:51.152899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.760 [2024-12-05 13:34:51.152906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:28.760 [2024-12-05 13:34:51.165805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.760 [2024-12-05 13:34:51.166381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.760 [2024-12-05 13:34:51.166400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.760 [2024-12-05 13:34:51.166409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.760 [2024-12-05 13:34:51.166630] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.760 [2024-12-05 13:34:51.166851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.760 [2024-12-05 13:34:51.166860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.760 [2024-12-05 13:34:51.166875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.760 [2024-12-05 13:34:51.166881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:28.760 [2024-12-05 13:34:51.179759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.760 [2024-12-05 13:34:51.180291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.760 [2024-12-05 13:34:51.180309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.760 [2024-12-05 13:34:51.180317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.761 [2024-12-05 13:34:51.180546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.761 [2024-12-05 13:34:51.180767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.761 [2024-12-05 13:34:51.180777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.761 [2024-12-05 13:34:51.180784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.761 [2024-12-05 13:34:51.180791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:28.761 [2024-12-05 13:34:51.193668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.761 [2024-12-05 13:34:51.194218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.761 [2024-12-05 13:34:51.194236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.761 [2024-12-05 13:34:51.194244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.761 [2024-12-05 13:34:51.194465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.761 [2024-12-05 13:34:51.194687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.761 [2024-12-05 13:34:51.194696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.761 [2024-12-05 13:34:51.194703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.761 [2024-12-05 13:34:51.194710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:28.761 [2024-12-05 13:34:51.207593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.761 [2024-12-05 13:34:51.208206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.761 [2024-12-05 13:34:51.208246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.761 [2024-12-05 13:34:51.208257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.761 [2024-12-05 13:34:51.208498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.761 [2024-12-05 13:34:51.208723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.761 [2024-12-05 13:34:51.208733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.761 [2024-12-05 13:34:51.208742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.761 [2024-12-05 13:34:51.208750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:28.761 [2024-12-05 13:34:51.221627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.761 [2024-12-05 13:34:51.222259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.761 [2024-12-05 13:34:51.222298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.761 [2024-12-05 13:34:51.222311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.761 [2024-12-05 13:34:51.222553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.761 [2024-12-05 13:34:51.222778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.761 [2024-12-05 13:34:51.222793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.761 [2024-12-05 13:34:51.222801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.761 [2024-12-05 13:34:51.222809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:28.761 [2024-12-05 13:34:51.235462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.761 [2024-12-05 13:34:51.235948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.761 [2024-12-05 13:34:51.235969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.761 [2024-12-05 13:34:51.235978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.761 [2024-12-05 13:34:51.236199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.761 [2024-12-05 13:34:51.236421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.761 [2024-12-05 13:34:51.236430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.761 [2024-12-05 13:34:51.236437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.761 [2024-12-05 13:34:51.236444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:28.761 [2024-12-05 13:34:51.249300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.761 [2024-12-05 13:34:51.249848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.761 [2024-12-05 13:34:51.249871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.761 [2024-12-05 13:34:51.249880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.761 [2024-12-05 13:34:51.250101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.761 [2024-12-05 13:34:51.250322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.761 [2024-12-05 13:34:51.250332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.761 [2024-12-05 13:34:51.250340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.761 [2024-12-05 13:34:51.250348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:28.761 [2024-12-05 13:34:51.263213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.761 [2024-12-05 13:34:51.263717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.761 [2024-12-05 13:34:51.263756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.761 [2024-12-05 13:34:51.263768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.761 [2024-12-05 13:34:51.264016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.761 [2024-12-05 13:34:51.264244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.761 [2024-12-05 13:34:51.264254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.761 [2024-12-05 13:34:51.264262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.761 [2024-12-05 13:34:51.264271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:28.761 [2024-12-05 13:34:51.277147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.761 [2024-12-05 13:34:51.277822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.761 [2024-12-05 13:34:51.277876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.761 [2024-12-05 13:34:51.277890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.761 [2024-12-05 13:34:51.278132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.761 [2024-12-05 13:34:51.278357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.761 [2024-12-05 13:34:51.278367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.761 [2024-12-05 13:34:51.278375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.761 [2024-12-05 13:34:51.278383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:28.761 [2024-12-05 13:34:51.291039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.761 [2024-12-05 13:34:51.291659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.761 [2024-12-05 13:34:51.291698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.761 [2024-12-05 13:34:51.291709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.761 [2024-12-05 13:34:51.291958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.761 [2024-12-05 13:34:51.292184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.761 [2024-12-05 13:34:51.292194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.761 [2024-12-05 13:34:51.292202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.761 [2024-12-05 13:34:51.292210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:28.761 [2024-12-05 13:34:51.304870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.761 [2024-12-05 13:34:51.305530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.761 [2024-12-05 13:34:51.305569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.761 [2024-12-05 13:34:51.305580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.761 [2024-12-05 13:34:51.305820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.761 [2024-12-05 13:34:51.306054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.761 [2024-12-05 13:34:51.306065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.761 [2024-12-05 13:34:51.306073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.761 [2024-12-05 13:34:51.306081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:28.761 [2024-12-05 13:34:51.318724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:28.761 [2024-12-05 13:34:51.319296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:28.761 [2024-12-05 13:34:51.319339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:28.761 [2024-12-05 13:34:51.319350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:28.762 [2024-12-05 13:34:51.319591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:28.762 [2024-12-05 13:34:51.319816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:28.762 [2024-12-05 13:34:51.319827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:28.762 [2024-12-05 13:34:51.319834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:28.762 [2024-12-05 13:34:51.319843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.023 [2024-12-05 13:34:51.332718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.023 [2024-12-05 13:34:51.333399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.023 [2024-12-05 13:34:51.333437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.023 [2024-12-05 13:34:51.333448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.023 [2024-12-05 13:34:51.333689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.023 [2024-12-05 13:34:51.333922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.023 [2024-12-05 13:34:51.333933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.023 [2024-12-05 13:34:51.333942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.023 [2024-12-05 13:34:51.333950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.023 [2024-12-05 13:34:51.346598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.023 [2024-12-05 13:34:51.347255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.023 [2024-12-05 13:34:51.347294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.023 [2024-12-05 13:34:51.347306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.023 [2024-12-05 13:34:51.347546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.023 [2024-12-05 13:34:51.347772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.023 [2024-12-05 13:34:51.347782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.023 [2024-12-05 13:34:51.347790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.023 [2024-12-05 13:34:51.347798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.023 [2024-12-05 13:34:51.360452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.023 [2024-12-05 13:34:51.361144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.023 [2024-12-05 13:34:51.361183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.023 [2024-12-05 13:34:51.361195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.023 [2024-12-05 13:34:51.361441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.023 [2024-12-05 13:34:51.361677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.023 [2024-12-05 13:34:51.361688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.023 [2024-12-05 13:34:51.361695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.023 [2024-12-05 13:34:51.361703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.023 [2024-12-05 13:34:51.374357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.023 [2024-12-05 13:34:51.374957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.024 [2024-12-05 13:34:51.374996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.024 [2024-12-05 13:34:51.375009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.024 [2024-12-05 13:34:51.375252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.024 [2024-12-05 13:34:51.375477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.024 [2024-12-05 13:34:51.375487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.024 [2024-12-05 13:34:51.375495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.024 [2024-12-05 13:34:51.375503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.024 [2024-12-05 13:34:51.388377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.024 [2024-12-05 13:34:51.388962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.024 [2024-12-05 13:34:51.389003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.024 [2024-12-05 13:34:51.389015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.024 [2024-12-05 13:34:51.389257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.024 [2024-12-05 13:34:51.389482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.024 [2024-12-05 13:34:51.389492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.024 [2024-12-05 13:34:51.389500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.024 [2024-12-05 13:34:51.389508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.024 [2024-12-05 13:34:51.402378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.024 [2024-12-05 13:34:51.402991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.024 [2024-12-05 13:34:51.403030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.024 [2024-12-05 13:34:51.403041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.024 [2024-12-05 13:34:51.403281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.024 [2024-12-05 13:34:51.403507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.024 [2024-12-05 13:34:51.403521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.024 [2024-12-05 13:34:51.403529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.024 [2024-12-05 13:34:51.403537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.024 [2024-12-05 13:34:51.416402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.024 [2024-12-05 13:34:51.417002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.024 [2024-12-05 13:34:51.417041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.024 [2024-12-05 13:34:51.417052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.024 [2024-12-05 13:34:51.417293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.024 [2024-12-05 13:34:51.417518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.024 [2024-12-05 13:34:51.417527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.024 [2024-12-05 13:34:51.417535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.024 [2024-12-05 13:34:51.417543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.024 [2024-12-05 13:34:51.430407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.024 [2024-12-05 13:34:51.431088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.024 [2024-12-05 13:34:51.431128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.024 [2024-12-05 13:34:51.431140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.024 [2024-12-05 13:34:51.431382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.024 [2024-12-05 13:34:51.431608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.024 [2024-12-05 13:34:51.431618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.024 [2024-12-05 13:34:51.431626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.024 [2024-12-05 13:34:51.431635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.024 [2024-12-05 13:34:51.444293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.024 [2024-12-05 13:34:51.444842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.024 [2024-12-05 13:34:51.444867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.024 [2024-12-05 13:34:51.444876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.024 [2024-12-05 13:34:51.445098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.024 [2024-12-05 13:34:51.445320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.024 [2024-12-05 13:34:51.445329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.024 [2024-12-05 13:34:51.445336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.024 [2024-12-05 13:34:51.445343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.024 [2024-12-05 13:34:51.458204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.024 [2024-12-05 13:34:51.458844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.024 [2024-12-05 13:34:51.458890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.024 [2024-12-05 13:34:51.458902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.024 [2024-12-05 13:34:51.459143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.024 [2024-12-05 13:34:51.459368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.024 [2024-12-05 13:34:51.459378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.024 [2024-12-05 13:34:51.459387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.024 [2024-12-05 13:34:51.459395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.024 [2024-12-05 13:34:51.472053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.024 [2024-12-05 13:34:51.472589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.024 [2024-12-05 13:34:51.472609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.024 [2024-12-05 13:34:51.472618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.024 [2024-12-05 13:34:51.472839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.024 [2024-12-05 13:34:51.473068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.024 [2024-12-05 13:34:51.473078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.024 [2024-12-05 13:34:51.473085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.024 [2024-12-05 13:34:51.473092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.024 [2024-12-05 13:34:51.485948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.024 [2024-12-05 13:34:51.486616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.024 [2024-12-05 13:34:51.486656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.024 [2024-12-05 13:34:51.486667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.024 [2024-12-05 13:34:51.486916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.024 [2024-12-05 13:34:51.487143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.024 [2024-12-05 13:34:51.487154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.024 [2024-12-05 13:34:51.487161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.024 [2024-12-05 13:34:51.487170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.024 [2024-12-05 13:34:51.499822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.024 [2024-12-05 13:34:51.500383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.024 [2024-12-05 13:34:51.500427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.024 [2024-12-05 13:34:51.500438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.024 [2024-12-05 13:34:51.500679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.024 [2024-12-05 13:34:51.500914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.024 [2024-12-05 13:34:51.500925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.024 [2024-12-05 13:34:51.500934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.024 [2024-12-05 13:34:51.500942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.025 [2024-12-05 13:34:51.513797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.025 [2024-12-05 13:34:51.514318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.025 [2024-12-05 13:34:51.514358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.025 [2024-12-05 13:34:51.514371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.025 [2024-12-05 13:34:51.514613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.025 [2024-12-05 13:34:51.514838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.025 [2024-12-05 13:34:51.514848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.025 [2024-12-05 13:34:51.514855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.025 [2024-12-05 13:34:51.514871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.025 [2024-12-05 13:34:51.527730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.025 [2024-12-05 13:34:51.528317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.025 [2024-12-05 13:34:51.528337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.025 [2024-12-05 13:34:51.528345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.025 [2024-12-05 13:34:51.528566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.025 [2024-12-05 13:34:51.528788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.025 [2024-12-05 13:34:51.528797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.025 [2024-12-05 13:34:51.528804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.025 [2024-12-05 13:34:51.528811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.025 [2024-12-05 13:34:51.541664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.025 [2024-12-05 13:34:51.542135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.025 [2024-12-05 13:34:51.542152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.025 [2024-12-05 13:34:51.542160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.025 [2024-12-05 13:34:51.542386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.025 [2024-12-05 13:34:51.542607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.025 [2024-12-05 13:34:51.542616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.025 [2024-12-05 13:34:51.542623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.025 [2024-12-05 13:34:51.542630] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.025 [2024-12-05 13:34:51.555480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.025 [2024-12-05 13:34:51.556162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.025 [2024-12-05 13:34:51.556201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.025 [2024-12-05 13:34:51.556213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.025 [2024-12-05 13:34:51.556453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.025 [2024-12-05 13:34:51.556678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.025 [2024-12-05 13:34:51.556688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.025 [2024-12-05 13:34:51.556696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.025 [2024-12-05 13:34:51.556705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.025 [2024-12-05 13:34:51.569368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.025 [2024-12-05 13:34:51.569965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.025 [2024-12-05 13:34:51.570005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.025 [2024-12-05 13:34:51.570018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.025 [2024-12-05 13:34:51.570262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.025 [2024-12-05 13:34:51.570488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.025 [2024-12-05 13:34:51.570498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.025 [2024-12-05 13:34:51.570506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.025 [2024-12-05 13:34:51.570514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.025 [2024-12-05 13:34:51.583394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.025 [2024-12-05 13:34:51.584172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.025 [2024-12-05 13:34:51.584212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.025 [2024-12-05 13:34:51.584225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.025 [2024-12-05 13:34:51.584466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.025 [2024-12-05 13:34:51.584692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.025 [2024-12-05 13:34:51.584707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.025 [2024-12-05 13:34:51.584716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.025 [2024-12-05 13:34:51.584724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.288 [2024-12-05 13:34:51.597391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.288 [2024-12-05 13:34:51.597969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.288 [2024-12-05 13:34:51.598008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.288 [2024-12-05 13:34:51.598020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.288 [2024-12-05 13:34:51.598264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.288 [2024-12-05 13:34:51.598489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.288 [2024-12-05 13:34:51.598499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.288 [2024-12-05 13:34:51.598507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.288 [2024-12-05 13:34:51.598515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.288 [2024-12-05 13:34:51.611380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.288 [2024-12-05 13:34:51.611969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.288 [2024-12-05 13:34:51.612008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.288 [2024-12-05 13:34:51.612020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.288 [2024-12-05 13:34:51.612260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.288 [2024-12-05 13:34:51.612486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.288 [2024-12-05 13:34:51.612496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.288 [2024-12-05 13:34:51.612504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.288 [2024-12-05 13:34:51.612512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.288 [2024-12-05 13:34:51.625374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.288 [2024-12-05 13:34:51.625970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.288 [2024-12-05 13:34:51.626010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.288 [2024-12-05 13:34:51.626022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.288 [2024-12-05 13:34:51.626265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.288 [2024-12-05 13:34:51.626490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.288 [2024-12-05 13:34:51.626500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.288 [2024-12-05 13:34:51.626508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.288 [2024-12-05 13:34:51.626516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.288 [2024-12-05 13:34:51.639392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.288 [2024-12-05 13:34:51.639982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.288 [2024-12-05 13:34:51.640021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.288 [2024-12-05 13:34:51.640034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.288 [2024-12-05 13:34:51.640277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.288 [2024-12-05 13:34:51.640503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.288 [2024-12-05 13:34:51.640513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.288 [2024-12-05 13:34:51.640521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.288 [2024-12-05 13:34:51.640529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.288 [2024-12-05 13:34:51.653401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.288 [2024-12-05 13:34:51.653986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.288 [2024-12-05 13:34:51.654026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.288 [2024-12-05 13:34:51.654039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.288 [2024-12-05 13:34:51.654282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.288 [2024-12-05 13:34:51.654507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.288 [2024-12-05 13:34:51.654517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.288 [2024-12-05 13:34:51.654525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.288 [2024-12-05 13:34:51.654534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.288 [2024-12-05 13:34:51.667404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.288 [2024-12-05 13:34:51.667961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.288 [2024-12-05 13:34:51.667982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.288 [2024-12-05 13:34:51.667991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.288 [2024-12-05 13:34:51.668213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.288 [2024-12-05 13:34:51.668434] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.288 [2024-12-05 13:34:51.668443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.288 [2024-12-05 13:34:51.668450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.288 [2024-12-05 13:34:51.668458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.288 [2024-12-05 13:34:51.681379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.288 [2024-12-05 13:34:51.681969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.288 [2024-12-05 13:34:51.682013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.288 [2024-12-05 13:34:51.682025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.288 [2024-12-05 13:34:51.682265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.288 [2024-12-05 13:34:51.682490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.288 [2024-12-05 13:34:51.682501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.288 [2024-12-05 13:34:51.682509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.288 [2024-12-05 13:34:51.682517] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.289 [2024-12-05 13:34:51.695392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.289 [2024-12-05 13:34:51.695968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.289 [2024-12-05 13:34:51.696008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.289 [2024-12-05 13:34:51.696020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.289 [2024-12-05 13:34:51.696264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.289 [2024-12-05 13:34:51.696490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.289 [2024-12-05 13:34:51.696500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.289 [2024-12-05 13:34:51.696507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.289 [2024-12-05 13:34:51.696516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.289 [2024-12-05 13:34:51.709383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.289 [2024-12-05 13:34:51.709985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.289 [2024-12-05 13:34:51.710024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.289 [2024-12-05 13:34:51.710036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.289 [2024-12-05 13:34:51.710276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.289 [2024-12-05 13:34:51.710501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.289 [2024-12-05 13:34:51.710512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.289 [2024-12-05 13:34:51.710520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.289 [2024-12-05 13:34:51.710528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.289 [2024-12-05 13:34:51.723395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.289 [2024-12-05 13:34:51.723997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.289 [2024-12-05 13:34:51.724036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.289 [2024-12-05 13:34:51.724048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.289 [2024-12-05 13:34:51.724297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.289 [2024-12-05 13:34:51.724522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.289 [2024-12-05 13:34:51.724533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.289 [2024-12-05 13:34:51.724541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.289 [2024-12-05 13:34:51.724549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.289 [2024-12-05 13:34:51.737415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.289 [2024-12-05 13:34:51.738110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.289 [2024-12-05 13:34:51.738149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.289 [2024-12-05 13:34:51.738160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.289 [2024-12-05 13:34:51.738401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.289 [2024-12-05 13:34:51.738626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.289 [2024-12-05 13:34:51.738637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.289 [2024-12-05 13:34:51.738645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.289 [2024-12-05 13:34:51.738653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.289 [2024-12-05 13:34:51.751311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.289 [2024-12-05 13:34:51.752005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.289 [2024-12-05 13:34:51.752045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.289 [2024-12-05 13:34:51.752057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.289 [2024-12-05 13:34:51.752299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.289 [2024-12-05 13:34:51.752525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.289 [2024-12-05 13:34:51.752535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.289 [2024-12-05 13:34:51.752544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.289 [2024-12-05 13:34:51.752553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.289 [2024-12-05 13:34:51.765234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.289 [2024-12-05 13:34:51.765908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.289 [2024-12-05 13:34:51.765948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.289 [2024-12-05 13:34:51.765959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.289 [2024-12-05 13:34:51.766200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.289 [2024-12-05 13:34:51.766425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.289 [2024-12-05 13:34:51.766441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.289 [2024-12-05 13:34:51.766450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.289 [2024-12-05 13:34:51.766458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.289 [2024-12-05 13:34:51.779130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.289 [2024-12-05 13:34:51.779785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.289 [2024-12-05 13:34:51.779825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.289 [2024-12-05 13:34:51.779837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.289 [2024-12-05 13:34:51.780088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.289 [2024-12-05 13:34:51.780314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.289 [2024-12-05 13:34:51.780324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.289 [2024-12-05 13:34:51.780332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.289 [2024-12-05 13:34:51.780340] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.289 [2024-12-05 13:34:51.792986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.289 [2024-12-05 13:34:51.793666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.289 [2024-12-05 13:34:51.793704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.289 [2024-12-05 13:34:51.793715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.289 [2024-12-05 13:34:51.793969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.290 [2024-12-05 13:34:51.794196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.290 [2024-12-05 13:34:51.794206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.290 [2024-12-05 13:34:51.794214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.290 [2024-12-05 13:34:51.794222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.290 [2024-12-05 13:34:51.806977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.290 [2024-12-05 13:34:51.807656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.290 [2024-12-05 13:34:51.807695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.290 [2024-12-05 13:34:51.807706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.290 [2024-12-05 13:34:51.807955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.290 [2024-12-05 13:34:51.808182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.290 [2024-12-05 13:34:51.808192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.290 [2024-12-05 13:34:51.808200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.290 [2024-12-05 13:34:51.808208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.290 [2024-12-05 13:34:51.820868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.290 [2024-12-05 13:34:51.821553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.290 [2024-12-05 13:34:51.821592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.290 [2024-12-05 13:34:51.821604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.290 [2024-12-05 13:34:51.821844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.290 [2024-12-05 13:34:51.822080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.290 [2024-12-05 13:34:51.822091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.290 [2024-12-05 13:34:51.822099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.290 [2024-12-05 13:34:51.822107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.290 [2024-12-05 13:34:51.834752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.290 [2024-12-05 13:34:51.835411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.290 [2024-12-05 13:34:51.835450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.290 [2024-12-05 13:34:51.835462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.290 [2024-12-05 13:34:51.835702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.290 [2024-12-05 13:34:51.835940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.290 [2024-12-05 13:34:51.835951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.290 [2024-12-05 13:34:51.835959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.290 [2024-12-05 13:34:51.835968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.290 [2024-12-05 13:34:51.848621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.290 [2024-12-05 13:34:51.849305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.290 [2024-12-05 13:34:51.849345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.290 [2024-12-05 13:34:51.849356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.290 [2024-12-05 13:34:51.849597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.290 [2024-12-05 13:34:51.849821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.290 [2024-12-05 13:34:51.849832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.290 [2024-12-05 13:34:51.849839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.290 [2024-12-05 13:34:51.849848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.551 [2024-12-05 13:34:51.862506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.551 [2024-12-05 13:34:51.863080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.551 [2024-12-05 13:34:51.863109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.551 [2024-12-05 13:34:51.863117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.551 [2024-12-05 13:34:51.863339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.552 [2024-12-05 13:34:51.863570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.552 [2024-12-05 13:34:51.863580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.552 [2024-12-05 13:34:51.863588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.552 [2024-12-05 13:34:51.863595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.552 [2024-12-05 13:34:51.876446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.552 [2024-12-05 13:34:51.877012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.552 [2024-12-05 13:34:51.877030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.552 [2024-12-05 13:34:51.877038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.552 [2024-12-05 13:34:51.877259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.552 [2024-12-05 13:34:51.877480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.552 [2024-12-05 13:34:51.877489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.552 [2024-12-05 13:34:51.877496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.552 [2024-12-05 13:34:51.877503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.552 [2024-12-05 13:34:51.890359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.552 [2024-12-05 13:34:51.890908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.552 [2024-12-05 13:34:51.890932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.552 [2024-12-05 13:34:51.890940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.552 [2024-12-05 13:34:51.891164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.552 [2024-12-05 13:34:51.891386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.552 [2024-12-05 13:34:51.891395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.552 [2024-12-05 13:34:51.891402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.552 [2024-12-05 13:34:51.891409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.552 [2024-12-05 13:34:51.904264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.552 [2024-12-05 13:34:51.904927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.552 [2024-12-05 13:34:51.904967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.552 [2024-12-05 13:34:51.904979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.552 [2024-12-05 13:34:51.905225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.552 [2024-12-05 13:34:51.905452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.552 [2024-12-05 13:34:51.905462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.552 [2024-12-05 13:34:51.905470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.552 [2024-12-05 13:34:51.905478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.552 [2024-12-05 13:34:51.918142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.552 [2024-12-05 13:34:51.918820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.552 [2024-12-05 13:34:51.918860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.552 [2024-12-05 13:34:51.918881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.552 [2024-12-05 13:34:51.919123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.552 [2024-12-05 13:34:51.919348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.552 [2024-12-05 13:34:51.919358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.552 [2024-12-05 13:34:51.919366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.552 [2024-12-05 13:34:51.919374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.552 [2024-12-05 13:34:51.932026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.552 [2024-12-05 13:34:51.932671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.552 [2024-12-05 13:34:51.932710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.552 [2024-12-05 13:34:51.932721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.552 [2024-12-05 13:34:51.932969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.552 [2024-12-05 13:34:51.933195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.552 [2024-12-05 13:34:51.933205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.552 [2024-12-05 13:34:51.933214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.552 [2024-12-05 13:34:51.933222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.552 [2024-12-05 13:34:51.945872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.552 [2024-12-05 13:34:51.946547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.552 [2024-12-05 13:34:51.946586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.552 [2024-12-05 13:34:51.946597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.552 [2024-12-05 13:34:51.946838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.552 [2024-12-05 13:34:51.947074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.552 [2024-12-05 13:34:51.947089] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.552 [2024-12-05 13:34:51.947097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.552 [2024-12-05 13:34:51.947105] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.552 [2024-12-05 13:34:51.959753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.552 [2024-12-05 13:34:51.960451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.552 [2024-12-05 13:34:51.960490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.552 [2024-12-05 13:34:51.960501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.552 [2024-12-05 13:34:51.960742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.552 [2024-12-05 13:34:51.960976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.552 [2024-12-05 13:34:51.960986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.552 [2024-12-05 13:34:51.960995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.552 [2024-12-05 13:34:51.961003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.552 [2024-12-05 13:34:51.973670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.552 [2024-12-05 13:34:51.974215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.552 [2024-12-05 13:34:51.974254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.552 [2024-12-05 13:34:51.974265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.553 [2024-12-05 13:34:51.974505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.553 [2024-12-05 13:34:51.974731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.553 [2024-12-05 13:34:51.974742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.553 [2024-12-05 13:34:51.974750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.553 [2024-12-05 13:34:51.974758] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.553 [2024-12-05 13:34:51.987638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.553 [2024-12-05 13:34:51.988269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.553 [2024-12-05 13:34:51.988308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.553 [2024-12-05 13:34:51.988319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.553 [2024-12-05 13:34:51.988560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.553 [2024-12-05 13:34:51.988785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.553 [2024-12-05 13:34:51.988795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.553 [2024-12-05 13:34:51.988803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.553 [2024-12-05 13:34:51.988811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.553 [2024-12-05 13:34:52.001465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.553 [2024-12-05 13:34:52.002013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.553 [2024-12-05 13:34:52.002052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.553 [2024-12-05 13:34:52.002064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.553 [2024-12-05 13:34:52.002308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.553 [2024-12-05 13:34:52.002534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.553 [2024-12-05 13:34:52.002545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.553 [2024-12-05 13:34:52.002554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.553 [2024-12-05 13:34:52.002563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.553 [2024-12-05 13:34:52.015428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.553 [2024-12-05 13:34:52.015990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.553 [2024-12-05 13:34:52.016029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.553 [2024-12-05 13:34:52.016043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.553 [2024-12-05 13:34:52.016287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.553 [2024-12-05 13:34:52.016512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.553 [2024-12-05 13:34:52.016521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.553 [2024-12-05 13:34:52.016530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.553 [2024-12-05 13:34:52.016539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.553 [2024-12-05 13:34:52.029403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:29.553 [2024-12-05 13:34:52.029987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:29.553 [2024-12-05 13:34:52.030026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:29.553 [2024-12-05 13:34:52.030039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:29.553 [2024-12-05 13:34:52.030281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:29.553 [2024-12-05 13:34:52.030507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:29.553 [2024-12-05 13:34:52.030516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:29.553 [2024-12-05 13:34:52.030524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:29.553 [2024-12-05 13:34:52.030532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:29.553 [2024-12-05 13:34:52.043400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.553 [2024-12-05 13:34:52.044069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.553 [2024-12-05 13:34:52.044113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.553 [2024-12-05 13:34:52.044125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.553 [2024-12-05 13:34:52.044365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.553 [2024-12-05 13:34:52.044590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.553 [2024-12-05 13:34:52.044600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.553 [2024-12-05 13:34:52.044608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.553 [2024-12-05 13:34:52.044616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.553 5976.40 IOPS, 23.35 MiB/s [2024-12-05T12:34:52.121Z] [2024-12-05 13:34:52.057673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.553 [2024-12-05 13:34:52.058349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.553 [2024-12-05 13:34:52.058388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.553 [2024-12-05 13:34:52.058399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.553 [2024-12-05 13:34:52.058639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.553 [2024-12-05 13:34:52.058871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.553 [2024-12-05 13:34:52.058882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.553 [2024-12-05 13:34:52.058890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.553 [2024-12-05 13:34:52.058898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
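[editor's note, not part of the captured log] The interleaved "5976.40 IOPS, 23.35 MiB/s" record above appears to be the benchmark tool's periodic throughput sample printed between reconnect attempts. The two figures are mutually consistent with a 4 KiB I/O size, which is an inference, not something the log states: 5976.40 IOPS x 4096 bytes ~= 24,479,334 bytes/s, and 24,479,334 / 1,048,576 ~= 23.35 MiB/s.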
00:30:29.553 [2024-12-05 13:34:52.071568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.553 [2024-12-05 13:34:52.072208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.553 [2024-12-05 13:34:52.072247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.553 [2024-12-05 13:34:52.072258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.553 [2024-12-05 13:34:52.072498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.553 [2024-12-05 13:34:52.072723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.553 [2024-12-05 13:34:52.072733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.554 [2024-12-05 13:34:52.072741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.554 [2024-12-05 13:34:52.072750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.554 [2024-12-05 13:34:52.085408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.554 [2024-12-05 13:34:52.086137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.554 [2024-12-05 13:34:52.086176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.554 [2024-12-05 13:34:52.086188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.554 [2024-12-05 13:34:52.086433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.554 [2024-12-05 13:34:52.086658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.554 [2024-12-05 13:34:52.086668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.554 [2024-12-05 13:34:52.086676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.554 [2024-12-05 13:34:52.086684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.554 [2024-12-05 13:34:52.099338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.554 [2024-12-05 13:34:52.099964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.554 [2024-12-05 13:34:52.100003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.554 [2024-12-05 13:34:52.100016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.554 [2024-12-05 13:34:52.100259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.554 [2024-12-05 13:34:52.100485] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.554 [2024-12-05 13:34:52.100495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.554 [2024-12-05 13:34:52.100503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.554 [2024-12-05 13:34:52.100511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.554 [2024-12-05 13:34:52.113165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.554 [2024-12-05 13:34:52.113799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.554 [2024-12-05 13:34:52.113838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.554 [2024-12-05 13:34:52.113850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.554 [2024-12-05 13:34:52.114100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.554 [2024-12-05 13:34:52.114326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.554 [2024-12-05 13:34:52.114336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.554 [2024-12-05 13:34:52.114344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.554 [2024-12-05 13:34:52.114352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.817 [2024-12-05 13:34:52.127008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.817 [2024-12-05 13:34:52.127685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.817 [2024-12-05 13:34:52.127724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.817 [2024-12-05 13:34:52.127735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.817 [2024-12-05 13:34:52.127984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.817 [2024-12-05 13:34:52.128210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.817 [2024-12-05 13:34:52.128225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.817 [2024-12-05 13:34:52.128233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.817 [2024-12-05 13:34:52.128241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.817 [2024-12-05 13:34:52.140893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.817 [2024-12-05 13:34:52.141560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.817 [2024-12-05 13:34:52.141598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.817 [2024-12-05 13:34:52.141610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.817 [2024-12-05 13:34:52.141851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.817 [2024-12-05 13:34:52.142085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.817 [2024-12-05 13:34:52.142096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.817 [2024-12-05 13:34:52.142104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.817 [2024-12-05 13:34:52.142112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.817 [2024-12-05 13:34:52.154752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.817 [2024-12-05 13:34:52.155426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.817 [2024-12-05 13:34:52.155466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.817 [2024-12-05 13:34:52.155477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.817 [2024-12-05 13:34:52.155717] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.817 [2024-12-05 13:34:52.155951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.817 [2024-12-05 13:34:52.155962] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.817 [2024-12-05 13:34:52.155970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.817 [2024-12-05 13:34:52.155978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.817 [2024-12-05 13:34:52.168630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.817 [2024-12-05 13:34:52.169187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.817 [2024-12-05 13:34:52.169208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.817 [2024-12-05 13:34:52.169217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.817 [2024-12-05 13:34:52.169438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.817 [2024-12-05 13:34:52.169660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.817 [2024-12-05 13:34:52.169669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.817 [2024-12-05 13:34:52.169677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.817 [2024-12-05 13:34:52.169688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.817 [2024-12-05 13:34:52.182558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.817 [2024-12-05 13:34:52.183183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.817 [2024-12-05 13:34:52.183223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.817 [2024-12-05 13:34:52.183234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.817 [2024-12-05 13:34:52.183475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.817 [2024-12-05 13:34:52.183701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.817 [2024-12-05 13:34:52.183711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.817 [2024-12-05 13:34:52.183719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.817 [2024-12-05 13:34:52.183727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.817 [2024-12-05 13:34:52.196411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.817 [2024-12-05 13:34:52.196986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.817 [2024-12-05 13:34:52.197024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.817 [2024-12-05 13:34:52.197037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.818 [2024-12-05 13:34:52.197278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.818 [2024-12-05 13:34:52.197504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.818 [2024-12-05 13:34:52.197514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.818 [2024-12-05 13:34:52.197522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.818 [2024-12-05 13:34:52.197531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.818 [2024-12-05 13:34:52.210401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.818 [2024-12-05 13:34:52.210976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.818 [2024-12-05 13:34:52.211015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.818 [2024-12-05 13:34:52.211028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.818 [2024-12-05 13:34:52.211273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.818 [2024-12-05 13:34:52.211498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.818 [2024-12-05 13:34:52.211507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.818 [2024-12-05 13:34:52.211515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.818 [2024-12-05 13:34:52.211524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.818 [2024-12-05 13:34:52.224384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.818 [2024-12-05 13:34:52.224982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.818 [2024-12-05 13:34:52.225026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.818 [2024-12-05 13:34:52.225039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.818 [2024-12-05 13:34:52.225283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.818 [2024-12-05 13:34:52.225508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.818 [2024-12-05 13:34:52.225518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.818 [2024-12-05 13:34:52.225526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.818 [2024-12-05 13:34:52.225534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.818 [2024-12-05 13:34:52.238400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.818 [2024-12-05 13:34:52.239101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.818 [2024-12-05 13:34:52.239141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.818 [2024-12-05 13:34:52.239152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.818 [2024-12-05 13:34:52.239392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.818 [2024-12-05 13:34:52.239618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.818 [2024-12-05 13:34:52.239628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.818 [2024-12-05 13:34:52.239636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.818 [2024-12-05 13:34:52.239644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.818 [2024-12-05 13:34:52.252297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.818 [2024-12-05 13:34:52.252955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.818 [2024-12-05 13:34:52.252994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.818 [2024-12-05 13:34:52.253007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.818 [2024-12-05 13:34:52.253249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.818 [2024-12-05 13:34:52.253474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.818 [2024-12-05 13:34:52.253486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.818 [2024-12-05 13:34:52.253494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.818 [2024-12-05 13:34:52.253502] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.818 [2024-12-05 13:34:52.266172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.818 [2024-12-05 13:34:52.266813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.818 [2024-12-05 13:34:52.266852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.818 [2024-12-05 13:34:52.266872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.818 [2024-12-05 13:34:52.267120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.818 [2024-12-05 13:34:52.267346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.818 [2024-12-05 13:34:52.267356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.818 [2024-12-05 13:34:52.267365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.818 [2024-12-05 13:34:52.267373] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.818 [2024-12-05 13:34:52.280026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.818 [2024-12-05 13:34:52.280699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.818 [2024-12-05 13:34:52.280738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.818 [2024-12-05 13:34:52.280749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.818 [2024-12-05 13:34:52.280999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.818 [2024-12-05 13:34:52.281225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.818 [2024-12-05 13:34:52.281235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.818 [2024-12-05 13:34:52.281243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.818 [2024-12-05 13:34:52.281251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.818 [2024-12-05 13:34:52.293902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.818 [2024-12-05 13:34:52.294573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.818 [2024-12-05 13:34:52.294612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.818 [2024-12-05 13:34:52.294623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.818 [2024-12-05 13:34:52.294874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.818 [2024-12-05 13:34:52.295100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.818 [2024-12-05 13:34:52.295110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.818 [2024-12-05 13:34:52.295119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.818 [2024-12-05 13:34:52.295126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.819 [2024-12-05 13:34:52.307779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.819 [2024-12-05 13:34:52.308439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-05 13:34:52.308478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.819 [2024-12-05 13:34:52.308489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.819 [2024-12-05 13:34:52.308730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.819 [2024-12-05 13:34:52.308962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.819 [2024-12-05 13:34:52.308981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.819 [2024-12-05 13:34:52.308989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.819 [2024-12-05 13:34:52.308997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.819 [2024-12-05 13:34:52.321651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.819 [2024-12-05 13:34:52.322211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-05 13:34:52.322250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.819 [2024-12-05 13:34:52.322262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.819 [2024-12-05 13:34:52.322502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.819 [2024-12-05 13:34:52.322727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.819 [2024-12-05 13:34:52.322737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.819 [2024-12-05 13:34:52.322745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.819 [2024-12-05 13:34:52.322753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.819 [2024-12-05 13:34:52.335627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.819 [2024-12-05 13:34:52.336310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-05 13:34:52.336349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.819 [2024-12-05 13:34:52.336360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.819 [2024-12-05 13:34:52.336601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.819 [2024-12-05 13:34:52.336826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.819 [2024-12-05 13:34:52.336836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.819 [2024-12-05 13:34:52.336843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.819 [2024-12-05 13:34:52.336852] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.819 [2024-12-05 13:34:52.349503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.819 [2024-12-05 13:34:52.350008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-05 13:34:52.350029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.819 [2024-12-05 13:34:52.350038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.819 [2024-12-05 13:34:52.350260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.819 [2024-12-05 13:34:52.350481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.819 [2024-12-05 13:34:52.350490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.819 [2024-12-05 13:34:52.350498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.819 [2024-12-05 13:34:52.350505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:29.819 [2024-12-05 13:34:52.363358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.819 [2024-12-05 13:34:52.363966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-05 13:34:52.364005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.819 [2024-12-05 13:34:52.364018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.819 [2024-12-05 13:34:52.364260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.819 [2024-12-05 13:34:52.364486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.819 [2024-12-05 13:34:52.364496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.819 [2024-12-05 13:34:52.364504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.819 [2024-12-05 13:34:52.364512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:29.819 [2024-12-05 13:34:52.377385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:29.819 [2024-12-05 13:34:52.377935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:29.819 [2024-12-05 13:34:52.377956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:29.819 [2024-12-05 13:34:52.377965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:29.819 [2024-12-05 13:34:52.378187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:29.819 [2024-12-05 13:34:52.378409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:29.819 [2024-12-05 13:34:52.378418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:29.819 [2024-12-05 13:34:52.378426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:29.819 [2024-12-05 13:34:52.378433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.082 [2024-12-05 13:34:52.391299] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.082 [2024-12-05 13:34:52.391978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.082 [2024-12-05 13:34:52.392016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.082 [2024-12-05 13:34:52.392029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.082 [2024-12-05 13:34:52.392274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.082 [2024-12-05 13:34:52.392499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.082 [2024-12-05 13:34:52.392510] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.082 [2024-12-05 13:34:52.392518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.082 [2024-12-05 13:34:52.392526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.082 [2024-12-05 13:34:52.405182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.082 [2024-12-05 13:34:52.405856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.082 [2024-12-05 13:34:52.405906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.082 [2024-12-05 13:34:52.405919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.082 [2024-12-05 13:34:52.406160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.082 [2024-12-05 13:34:52.406385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.082 [2024-12-05 13:34:52.406396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.082 [2024-12-05 13:34:52.406404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.082 [2024-12-05 13:34:52.406412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.082 [2024-12-05 13:34:52.419064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.082 [2024-12-05 13:34:52.419515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.082 [2024-12-05 13:34:52.419535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.082 [2024-12-05 13:34:52.419543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.082 [2024-12-05 13:34:52.419764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.082 [2024-12-05 13:34:52.419992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.082 [2024-12-05 13:34:52.420003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.082 [2024-12-05 13:34:52.420010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.082 [2024-12-05 13:34:52.420017] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.082 [2024-12-05 13:34:52.432915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.082 [2024-12-05 13:34:52.433571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.082 [2024-12-05 13:34:52.433610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.082 [2024-12-05 13:34:52.433623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.082 [2024-12-05 13:34:52.433872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.082 [2024-12-05 13:34:52.434099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.082 [2024-12-05 13:34:52.434109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.082 [2024-12-05 13:34:52.434117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.082 [2024-12-05 13:34:52.434125] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.082 [2024-12-05 13:34:52.446774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.082 [2024-12-05 13:34:52.447423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.082 [2024-12-05 13:34:52.447463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.082 [2024-12-05 13:34:52.447475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.083 [2024-12-05 13:34:52.447723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.083 [2024-12-05 13:34:52.447957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.083 [2024-12-05 13:34:52.447968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.083 [2024-12-05 13:34:52.447976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.083 [2024-12-05 13:34:52.447984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.083 [2024-12-05 13:34:52.460640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.083 [2024-12-05 13:34:52.461121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.083 [2024-12-05 13:34:52.461143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.083 [2024-12-05 13:34:52.461151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.083 [2024-12-05 13:34:52.461373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.083 [2024-12-05 13:34:52.461595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.083 [2024-12-05 13:34:52.461605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.083 [2024-12-05 13:34:52.461613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.083 [2024-12-05 13:34:52.461620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.083 [2024-12-05 13:34:52.474490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.083 [2024-12-05 13:34:52.475185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.083 [2024-12-05 13:34:52.475224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.083 [2024-12-05 13:34:52.475236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.083 [2024-12-05 13:34:52.475477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.083 [2024-12-05 13:34:52.475703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.083 [2024-12-05 13:34:52.475712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.083 [2024-12-05 13:34:52.475720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.083 [2024-12-05 13:34:52.475728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.083 [2024-12-05 13:34:52.488398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.083 [2024-12-05 13:34:52.489099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.083 [2024-12-05 13:34:52.489139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.083 [2024-12-05 13:34:52.489150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.083 [2024-12-05 13:34:52.489391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.083 [2024-12-05 13:34:52.489616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.083 [2024-12-05 13:34:52.489630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.083 [2024-12-05 13:34:52.489638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.083 [2024-12-05 13:34:52.489646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.083 [2024-12-05 13:34:52.502311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.083 [2024-12-05 13:34:52.502921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.083 [2024-12-05 13:34:52.502948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.083 [2024-12-05 13:34:52.502957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.083 [2024-12-05 13:34:52.503182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.083 [2024-12-05 13:34:52.503406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.083 [2024-12-05 13:34:52.503415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.083 [2024-12-05 13:34:52.503423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.083 [2024-12-05 13:34:52.503430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.083 [2024-12-05 13:34:52.516286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.083 [2024-12-05 13:34:52.516840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.083 [2024-12-05 13:34:52.516858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.083 [2024-12-05 13:34:52.516872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.083 [2024-12-05 13:34:52.517093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.083 [2024-12-05 13:34:52.517315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.083 [2024-12-05 13:34:52.517325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.083 [2024-12-05 13:34:52.517333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.083 [2024-12-05 13:34:52.517341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.083 [2024-12-05 13:34:52.530202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.083 [2024-12-05 13:34:52.530882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.083 [2024-12-05 13:34:52.530921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.083 [2024-12-05 13:34:52.530932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.083 [2024-12-05 13:34:52.531173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.083 [2024-12-05 13:34:52.531398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.083 [2024-12-05 13:34:52.531408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.083 [2024-12-05 13:34:52.531416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.083 [2024-12-05 13:34:52.531424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.083 [2024-12-05 13:34:52.544083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.083 [2024-12-05 13:34:52.544627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.083 [2024-12-05 13:34:52.544647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.083 [2024-12-05 13:34:52.544656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.083 [2024-12-05 13:34:52.544883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.083 [2024-12-05 13:34:52.545106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.083 [2024-12-05 13:34:52.545115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.083 [2024-12-05 13:34:52.545123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.083 [2024-12-05 13:34:52.545130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.083 [2024-12-05 13:34:52.557985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.083 [2024-12-05 13:34:52.558549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.084 [2024-12-05 13:34:52.558588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.084 [2024-12-05 13:34:52.558599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.084 [2024-12-05 13:34:52.558840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.084 [2024-12-05 13:34:52.559073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.084 [2024-12-05 13:34:52.559084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.084 [2024-12-05 13:34:52.559092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.084 [2024-12-05 13:34:52.559100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.084 [2024-12-05 13:34:52.571981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.084 [2024-12-05 13:34:52.572524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.084 [2024-12-05 13:34:52.572545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.084 [2024-12-05 13:34:52.572554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.084 [2024-12-05 13:34:52.572775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.084 [2024-12-05 13:34:52.573004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.084 [2024-12-05 13:34:52.573017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.084 [2024-12-05 13:34:52.573024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.084 [2024-12-05 13:34:52.573031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.084 [2024-12-05 13:34:52.585898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.084 [2024-12-05 13:34:52.586516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.084 [2024-12-05 13:34:52.586560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.084 [2024-12-05 13:34:52.586571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.084 [2024-12-05 13:34:52.586811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.084 [2024-12-05 13:34:52.587046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.084 [2024-12-05 13:34:52.587057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.084 [2024-12-05 13:34:52.587066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.084 [2024-12-05 13:34:52.587074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.084 [2024-12-05 13:34:52.599727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.084 [2024-12-05 13:34:52.600382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.084 [2024-12-05 13:34:52.600421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.084 [2024-12-05 13:34:52.600432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.084 [2024-12-05 13:34:52.600672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.084 [2024-12-05 13:34:52.600906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.084 [2024-12-05 13:34:52.600917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.084 [2024-12-05 13:34:52.600925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.084 [2024-12-05 13:34:52.600934] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.084 [2024-12-05 13:34:52.613579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.084 [2024-12-05 13:34:52.614257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.084 [2024-12-05 13:34:52.614297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.084 [2024-12-05 13:34:52.614308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.084 [2024-12-05 13:34:52.614549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.084 [2024-12-05 13:34:52.614774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.084 [2024-12-05 13:34:52.614784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.084 [2024-12-05 13:34:52.614792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.084 [2024-12-05 13:34:52.614800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.084 [2024-12-05 13:34:52.627464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.084 [2024-12-05 13:34:52.627987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.084 [2024-12-05 13:34:52.628027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.084 [2024-12-05 13:34:52.628040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.084 [2024-12-05 13:34:52.628289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.084 [2024-12-05 13:34:52.628515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.084 [2024-12-05 13:34:52.628525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.084 [2024-12-05 13:34:52.628533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.084 [2024-12-05 13:34:52.628541] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.084 [2024-12-05 13:34:52.641410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.084 [2024-12-05 13:34:52.641974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.084 [2024-12-05 13:34:52.642014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.084 [2024-12-05 13:34:52.642026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.084 [2024-12-05 13:34:52.642268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.084 [2024-12-05 13:34:52.642494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.084 [2024-12-05 13:34:52.642504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.084 [2024-12-05 13:34:52.642512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.084 [2024-12-05 13:34:52.642520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.346 [2024-12-05 13:34:52.655404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.347 [2024-12-05 13:34:52.656185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.347 [2024-12-05 13:34:52.656224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.347 [2024-12-05 13:34:52.656236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.347 [2024-12-05 13:34:52.656477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.347 [2024-12-05 13:34:52.656703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.347 [2024-12-05 13:34:52.656713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.347 [2024-12-05 13:34:52.656722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.347 [2024-12-05 13:34:52.656730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.347 [2024-12-05 13:34:52.669395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.347 [2024-12-05 13:34:52.669983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.347 [2024-12-05 13:34:52.670003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.347 [2024-12-05 13:34:52.670012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.347 [2024-12-05 13:34:52.670234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.347 [2024-12-05 13:34:52.670455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.347 [2024-12-05 13:34:52.670469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.347 [2024-12-05 13:34:52.670477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.347 [2024-12-05 13:34:52.670484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.347 [2024-12-05 13:34:52.683357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.347 [2024-12-05 13:34:52.683892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.347 [2024-12-05 13:34:52.683911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.347 [2024-12-05 13:34:52.683920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.347 [2024-12-05 13:34:52.684141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.347 [2024-12-05 13:34:52.684362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.347 [2024-12-05 13:34:52.684372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.347 [2024-12-05 13:34:52.684380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.347 [2024-12-05 13:34:52.684387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.347 [2024-12-05 13:34:52.697271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.347 [2024-12-05 13:34:52.697838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.347 [2024-12-05 13:34:52.697855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.347 [2024-12-05 13:34:52.697946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.347 [2024-12-05 13:34:52.698169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.347 [2024-12-05 13:34:52.698391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.347 [2024-12-05 13:34:52.698400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.347 [2024-12-05 13:34:52.698408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.347 [2024-12-05 13:34:52.698415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.347 [2024-12-05 13:34:52.711293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.347 [2024-12-05 13:34:52.711858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.347 [2024-12-05 13:34:52.711883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.347 [2024-12-05 13:34:52.711891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.347 [2024-12-05 13:34:52.712112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.347 [2024-12-05 13:34:52.712333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.347 [2024-12-05 13:34:52.712342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.347 [2024-12-05 13:34:52.712350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.347 [2024-12-05 13:34:52.712357] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.347 [2024-12-05 13:34:52.725232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.347 [2024-12-05 13:34:52.725910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.347 [2024-12-05 13:34:52.725949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.347 [2024-12-05 13:34:52.725960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.347 [2024-12-05 13:34:52.726201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.347 [2024-12-05 13:34:52.726427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.347 [2024-12-05 13:34:52.726437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.347 [2024-12-05 13:34:52.726445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.347 [2024-12-05 13:34:52.726453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.347 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1126726 Killed "${NVMF_APP[@]}" "$@"
00:30:30.347 13:34:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:30:30.347 13:34:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:30:30.347 13:34:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:30.347 13:34:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:30.347 13:34:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:30.347 [2024-12-05 13:34:52.739118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.347 [2024-12-05 13:34:52.739802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.347 [2024-12-05 13:34:52.739842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.347 [2024-12-05 13:34:52.739855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.347 [2024-12-05 13:34:52.740106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.347 [2024-12-05 13:34:52.740331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.347 [2024-12-05 13:34:52.740342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.347 [2024-12-05 13:34:52.740351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.347 [2024-12-05 13:34:52.740359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.347 13:34:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1128324
00:30:30.347 13:34:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1128324
00:30:30.347 13:34:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:30:30.347 13:34:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1128324 ']'
00:30:30.348 13:34:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:30.348 13:34:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:30.348 13:34:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:30.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
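The "Killed" notice above is the harness SIGKILLing the previous target process (pid 1126726) from bdevperf.sh, after which tgt_init/nvmfappstart relaunch nvmf_tgt (new pid 1128324) inside the cvl_0_0_ns_spdk namespace with core mask 0xE and all tracepoint groups enabled (-e 0xFFFF), and waitforlisten polls until the new process answers on /var/tmp/spdk.sock. A rough C equivalent of that wait - assuming only that "ready" means the UNIX-domain RPC socket accepts a connection; the real waitforlisten is a bash helper with additional liveness checks:

    /* wait_for_rpc.c - poll until a UNIX-domain socket accepts connections.
     * Build: cc -o wait_for_rpc wait_for_rpc.c
     * Mirrors the intent of waitforlisten: /var/tmp/spdk.sock, max_retries=100
     * as in the trace above; the 100 ms pacing is an assumption. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

        for (int attempt = 0; attempt < 100; attempt++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0) {
                perror("socket");
                return 1;
            }
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                printf("RPC socket is up after %d tries\n", attempt + 1);
                close(fd);
                return 0;
            }
            close(fd);
            usleep(100 * 1000);      /* wait before the next attempt */
        }
        fprintf(stderr, "gave up waiting for /var/tmp/spdk.sock\n");
        return 1;
    }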
00:30:30.348 13:34:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:30.348 13:34:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:30.348 [2024-12-05 13:34:52.753022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.348 [2024-12-05 13:34:52.753660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.348 [2024-12-05 13:34:52.753699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.348 [2024-12-05 13:34:52.753710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.348 [2024-12-05 13:34:52.753961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.348 [2024-12-05 13:34:52.754189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.348 [2024-12-05 13:34:52.754200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.348 [2024-12-05 13:34:52.754209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.348 [2024-12-05 13:34:52.754217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.348 [2024-12-05 13:34:52.766875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.348 [2024-12-05 13:34:52.767455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.348 [2024-12-05 13:34:52.767475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.348 [2024-12-05 13:34:52.767484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.348 [2024-12-05 13:34:52.767706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.348 [2024-12-05 13:34:52.767934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.348 [2024-12-05 13:34:52.767946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.348 [2024-12-05 13:34:52.767954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.348 [2024-12-05 13:34:52.767962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.348 [2024-12-05 13:34:52.780840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.348 [2024-12-05 13:34:52.781526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.348 [2024-12-05 13:34:52.781565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.348 [2024-12-05 13:34:52.781577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.348 [2024-12-05 13:34:52.781817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.348 [2024-12-05 13:34:52.782050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.348 [2024-12-05 13:34:52.782062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.348 [2024-12-05 13:34:52.782070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.348 [2024-12-05 13:34:52.782078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.348 [2024-12-05 13:34:52.793044] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization...
00:30:30.348 [2024-12-05 13:34:52.793095] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:30.348 [2024-12-05 13:34:52.794737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.348 [2024-12-05 13:34:52.795342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.348 [2024-12-05 13:34:52.795362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.348 [2024-12-05 13:34:52.795371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.348 [2024-12-05 13:34:52.795593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.348 [2024-12-05 13:34:52.795814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.348 [2024-12-05 13:34:52.795825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.348 [2024-12-05 13:34:52.795832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.348 [2024-12-05 13:34:52.795840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.348 [2024-12-05 13:34:52.808702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.348 [2024-12-05 13:34:52.809262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.348 [2024-12-05 13:34:52.809280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.348 [2024-12-05 13:34:52.809288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.348 [2024-12-05 13:34:52.809508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.348 [2024-12-05 13:34:52.809729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.349 [2024-12-05 13:34:52.809738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.349 [2024-12-05 13:34:52.809746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.349 [2024-12-05 13:34:52.809753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.349 [2024-12-05 13:34:52.822620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.349 [2024-12-05 13:34:52.823295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.349 [2024-12-05 13:34:52.823335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.349 [2024-12-05 13:34:52.823346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.349 [2024-12-05 13:34:52.823588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.349 [2024-12-05 13:34:52.823814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.349 [2024-12-05 13:34:52.823824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.349 [2024-12-05 13:34:52.823832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.349 [2024-12-05 13:34:52.823841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.349 [2024-12-05 13:34:52.836622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.349 [2024-12-05 13:34:52.837209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.349 [2024-12-05 13:34:52.837230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.349 [2024-12-05 13:34:52.837239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.349 [2024-12-05 13:34:52.837460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.349 [2024-12-05 13:34:52.837682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.349 [2024-12-05 13:34:52.837691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.349 [2024-12-05 13:34:52.837699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.349 [2024-12-05 13:34:52.837706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.349 [2024-12-05 13:34:52.850571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.349 [2024-12-05 13:34:52.851246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.349 [2024-12-05 13:34:52.851285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.349 [2024-12-05 13:34:52.851296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.349 [2024-12-05 13:34:52.851537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.349 [2024-12-05 13:34:52.851762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.349 [2024-12-05 13:34:52.851772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.349 [2024-12-05 13:34:52.851780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.349 [2024-12-05 13:34:52.851789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.349 [2024-12-05 13:34:52.864455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.349 [2024-12-05 13:34:52.865185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.349 [2024-12-05 13:34:52.865224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.349 [2024-12-05 13:34:52.865235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.349 [2024-12-05 13:34:52.865476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.349 [2024-12-05 13:34:52.865702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.349 [2024-12-05 13:34:52.865712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.349 [2024-12-05 13:34:52.865720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.349 [2024-12-05 13:34:52.865729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.349 [2024-12-05 13:34:52.878402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.349 [2024-12-05 13:34:52.878970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.349 [2024-12-05 13:34:52.878992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.349 [2024-12-05 13:34:52.879005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.349 [2024-12-05 13:34:52.879226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.349 [2024-12-05 13:34:52.879448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.349 [2024-12-05 13:34:52.879457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.349 [2024-12-05 13:34:52.879465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.349 [2024-12-05 13:34:52.879472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.349 [2024-12-05 13:34:52.890808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:30.349 [2024-12-05 13:34:52.892338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.349 [2024-12-05 13:34:52.892874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.349 [2024-12-05 13:34:52.892892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.349 [2024-12-05 13:34:52.892900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.349 [2024-12-05 13:34:52.893121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.349 [2024-12-05 13:34:52.893341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.349 [2024-12-05 13:34:52.893351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.349 [2024-12-05 13:34:52.893358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.350 [2024-12-05 13:34:52.893365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.350 [2024-12-05 13:34:52.906239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.350 [2024-12-05 13:34:52.906900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.350 [2024-12-05 13:34:52.906943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.350 [2024-12-05 13:34:52.906956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.350 [2024-12-05 13:34:52.907202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.350 [2024-12-05 13:34:52.907428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.350 [2024-12-05 13:34:52.907438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.350 [2024-12-05 13:34:52.907447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.350 [2024-12-05 13:34:52.907455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.612 [2024-12-05 13:34:52.920123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.612 [2024-12-05 13:34:52.920315] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:30.612 [2024-12-05 13:34:52.920336] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:30.612 [2024-12-05 13:34:52.920343] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:30.612 [2024-12-05 13:34:52.920348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:30.612 [2024-12-05 13:34:52.920356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:30.612 [2024-12-05 13:34:52.920874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 13:34:52.920915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.612 [2024-12-05 13:34:52.920928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.612 [2024-12-05 13:34:52.921172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.612 [2024-12-05 13:34:52.921397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.612 [2024-12-05 13:34:52.921407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.612 [2024-12-05 13:34:52.921415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.612 [2024-12-05 13:34:52.921423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.612 [2024-12-05 13:34:52.921586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:30.612 [2024-12-05 13:34:52.921747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:30.612 [2024-12-05 13:34:52.921749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:30.612 [2024-12-05 13:34:52.934084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.612 [2024-12-05 13:34:52.934709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 13:34:52.934731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.612 [2024-12-05 13:34:52.934740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.612 [2024-12-05 13:34:52.934968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.612 [2024-12-05 13:34:52.935190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.612 [2024-12-05 13:34:52.935199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.612 [2024-12-05 13:34:52.935207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.612 [2024-12-05 13:34:52.935215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
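The startup notices interleaved with the retries are internally consistent: the EAL was handed -c 0xE, app.c reports "Total cores available: 3", and reactors come up on cores 1, 2 and 3. 0xE is binary 1110, so bit 0 (core 0) is clear and bits 1-3 are set; the trace notices simply restate the -e 0xFFFF tracepoint mask and where the /dev/shm/nvmf_trace.0 buffer lives. A quick check of the mask arithmetic:

    /* coremask.c - enumerate the cores selected by mask 0xE.
     * Build: cc -o coremask coremask.c */
    #include <stdio.h>

    int main(void)
    {
        unsigned int mask = 0xE;    /* from "-m 0xE" / "-c 0xE" in the log */
        int total = 0;

        for (int core = 0; core < 32; core++) {
            if (mask & (1u << core)) {
                printf("reactor core %d\n", core);    /* prints 1, 2, 3 */
                total++;
            }
        }
        printf("Total cores available: %d\n", total); /* prints 3 */
        return 0;
    }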
00:30:30.612 [2024-12-05 13:34:52.948085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.612 [2024-12-05 13:34:52.948697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 13:34:52.948716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.612 [2024-12-05 13:34:52.948725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.612 [2024-12-05 13:34:52.948951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.612 [2024-12-05 13:34:52.949173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.612 [2024-12-05 13:34:52.949183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.612 [2024-12-05 13:34:52.949190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.612 [2024-12-05 13:34:52.949197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.612 [2024-12-05 13:34:52.962066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.612 [2024-12-05 13:34:52.962672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 13:34:52.962691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.612 [2024-12-05 13:34:52.962700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.612 [2024-12-05 13:34:52.962926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.612 [2024-12-05 13:34:52.963148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.612 [2024-12-05 13:34:52.963158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.612 [2024-12-05 13:34:52.963166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.612 [2024-12-05 13:34:52.963173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.612 [2024-12-05 13:34:52.976045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.612 [2024-12-05 13:34:52.976473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 13:34:52.976490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.612 [2024-12-05 13:34:52.976498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.612 [2024-12-05 13:34:52.976719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.612 [2024-12-05 13:34:52.976946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.612 [2024-12-05 13:34:52.976956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.612 [2024-12-05 13:34:52.976964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.612 [2024-12-05 13:34:52.976970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.612 [2024-12-05 13:34:52.990053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.612 [2024-12-05 13:34:52.990488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 13:34:52.990505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.612 [2024-12-05 13:34:52.990513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.612 [2024-12-05 13:34:52.990733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.612 [2024-12-05 13:34:52.990961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.612 [2024-12-05 13:34:52.990971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.612 [2024-12-05 13:34:52.990979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.612 [2024-12-05 13:34:52.990986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.612 [2024-12-05 13:34:53.004049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.612 [2024-12-05 13:34:53.004479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 13:34:53.004496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.612 [2024-12-05 13:34:53.004508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.612 [2024-12-05 13:34:53.004728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.612 [2024-12-05 13:34:53.004958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.612 [2024-12-05 13:34:53.004970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.612 [2024-12-05 13:34:53.004978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.612 [2024-12-05 13:34:53.004986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.612 [2024-12-05 13:34:53.018055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.612 [2024-12-05 13:34:53.018631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.612 [2024-12-05 13:34:53.018648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.612 [2024-12-05 13:34:53.018656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.613 [2024-12-05 13:34:53.018880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.613 [2024-12-05 13:34:53.019103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.613 [2024-12-05 13:34:53.019120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.613 [2024-12-05 13:34:53.019129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.613 [2024-12-05 13:34:53.019138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.613 [2024-12-05 13:34:53.031992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.613 [2024-12-05 13:34:53.032668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 13:34:53.032712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.613 [2024-12-05 13:34:53.032723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.613 [2024-12-05 13:34:53.032978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.613 [2024-12-05 13:34:53.033205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.613 [2024-12-05 13:34:53.033215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.613 [2024-12-05 13:34:53.033224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.613 [2024-12-05 13:34:53.033232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.613 [2024-12-05 13:34:53.045888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.613 [2024-12-05 13:34:53.046422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 13:34:53.046462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.613 [2024-12-05 13:34:53.046473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.613 [2024-12-05 13:34:53.046715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.613 [2024-12-05 13:34:53.046953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.613 [2024-12-05 13:34:53.046964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.613 [2024-12-05 13:34:53.046973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.613 [2024-12-05 13:34:53.046982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.613 4980.33 IOPS, 19.45 MiB/s [2024-12-05T12:34:53.181Z]
00:30:30.613 [2024-12-05 13:34:53.060034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.613 [2024-12-05 13:34:53.060630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 13:34:53.060650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.613 [2024-12-05 13:34:53.060659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.613 [2024-12-05 13:34:53.060888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.613 [2024-12-05 13:34:53.061110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.613 [2024-12-05 13:34:53.061120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.613 [2024-12-05 13:34:53.061128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.613 [2024-12-05 13:34:53.061135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.613 [2024-12-05 13:34:53.073993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.613 [2024-12-05 13:34:53.074685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 13:34:53.074725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.613 [2024-12-05 13:34:53.074738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.613 [2024-12-05 13:34:53.074990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.613 [2024-12-05 13:34:53.075216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.613 [2024-12-05 13:34:53.075227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.613 [2024-12-05 13:34:53.075235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.613 [2024-12-05 13:34:53.075243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
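The "4980.33 IOPS, 19.45 MiB/s" entry is bdevperf's periodic throughput sample for the interval, reported in its own UTC timestamp format. The two numbers agree if the workload issues 4 KiB I/Os (an assumption here; the I/O size is configured elsewhere in the script): 4980.33 x 4096 B / 2^20 ≈ 19.45 MiB/s. The conversion:

    /* iops_to_mibs.c - sanity-check the bdevperf throughput line.
     * Build: cc -o iops_to_mibs iops_to_mibs.c */
    #include <stdio.h>

    int main(void)
    {
        double iops = 4980.33;           /* from the log */
        double io_size = 4096.0;         /* assumed 4 KiB I/O size */
        double mib_s = iops * io_size / (1024.0 * 1024.0);
        printf("%.2f IOPS -> %.2f MiB/s\n", iops, mib_s);  /* 19.45 MiB/s */
        return 0;
    }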
00:30:30.613 [2024-12-05 13:34:53.087909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.613 [2024-12-05 13:34:53.088458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 13:34:53.088479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.613 [2024-12-05 13:34:53.088487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.613 [2024-12-05 13:34:53.088709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.613 [2024-12-05 13:34:53.088935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.613 [2024-12-05 13:34:53.088946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.613 [2024-12-05 13:34:53.088958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.613 [2024-12-05 13:34:53.088965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.613 [2024-12-05 13:34:53.101817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.613 [2024-12-05 13:34:53.102479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 13:34:53.102519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.613 [2024-12-05 13:34:53.102530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.613 [2024-12-05 13:34:53.102770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.613 [2024-12-05 13:34:53.103005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.613 [2024-12-05 13:34:53.103015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.613 [2024-12-05 13:34:53.103023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.613 [2024-12-05 13:34:53.103032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.613 [2024-12-05 13:34:53.115693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.613 [2024-12-05 13:34:53.116402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 13:34:53.116442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.613 [2024-12-05 13:34:53.116453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.613 [2024-12-05 13:34:53.116694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.613 [2024-12-05 13:34:53.116926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.613 [2024-12-05 13:34:53.116937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.613 [2024-12-05 13:34:53.116945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.613 [2024-12-05 13:34:53.116953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.613 [2024-12-05 13:34:53.129599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.613 [2024-12-05 13:34:53.130156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 13:34:53.130176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.613 [2024-12-05 13:34:53.130185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.613 [2024-12-05 13:34:53.130406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.613 [2024-12-05 13:34:53.130627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.613 [2024-12-05 13:34:53.130637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.613 [2024-12-05 13:34:53.130645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.613 [2024-12-05 13:34:53.130653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.613 [2024-12-05 13:34:53.143509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.613 [2024-12-05 13:34:53.144171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 13:34:53.144210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.613 [2024-12-05 13:34:53.144222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.613 [2024-12-05 13:34:53.144462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.613 [2024-12-05 13:34:53.144688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.613 [2024-12-05 13:34:53.144698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.613 [2024-12-05 13:34:53.144706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.613 [2024-12-05 13:34:53.144714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.613 [2024-12-05 13:34:53.157370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.613 [2024-12-05 13:34:53.158079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.613 [2024-12-05 13:34:53.158118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.614 [2024-12-05 13:34:53.158129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.614 [2024-12-05 13:34:53.158370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.614 [2024-12-05 13:34:53.158596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.614 [2024-12-05 13:34:53.158605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.614 [2024-12-05 13:34:53.158613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.614 [2024-12-05 13:34:53.158621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.614 [2024-12-05 13:34:53.171284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.614 [2024-12-05 13:34:53.171870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.614 [2024-12-05 13:34:53.171910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.614 [2024-12-05 13:34:53.171922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.614 [2024-12-05 13:34:53.172164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.614 [2024-12-05 13:34:53.172390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.614 [2024-12-05 13:34:53.172400] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.614 [2024-12-05 13:34:53.172408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.614 [2024-12-05 13:34:53.172416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.876 [2024-12-05 13:34:53.185293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.876 [2024-12-05 13:34:53.185966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.876 [2024-12-05 13:34:53.186010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.876 [2024-12-05 13:34:53.186023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.876 [2024-12-05 13:34:53.186267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.876 [2024-12-05 13:34:53.186491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.877 [2024-12-05 13:34:53.186503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.877 [2024-12-05 13:34:53.186511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.877 [2024-12-05 13:34:53.186519] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.877 [2024-12-05 13:34:53.199173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.877 [2024-12-05 13:34:53.199725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.877 [2024-12-05 13:34:53.199744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.877 [2024-12-05 13:34:53.199753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.877 [2024-12-05 13:34:53.199980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.877 [2024-12-05 13:34:53.200202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.877 [2024-12-05 13:34:53.200213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.877 [2024-12-05 13:34:53.200220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.877 [2024-12-05 13:34:53.200227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.877 [2024-12-05 13:34:53.213079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:30.877 [2024-12-05 13:34:53.213496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:30.877 [2024-12-05 13:34:53.213514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420
00:30:30.877 [2024-12-05 13:34:53.213522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set
00:30:30.877 [2024-12-05 13:34:53.213742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor
00:30:30.877 [2024-12-05 13:34:53.213968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:30.877 [2024-12-05 13:34:53.213977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:30.877 [2024-12-05 13:34:53.213986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:30.877 [2024-12-05 13:34:53.213993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:30.877 [2024-12-05 13:34:53.227054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.877 [2024-12-05 13:34:53.227588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.877 [2024-12-05 13:34:53.227604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.877 [2024-12-05 13:34:53.227612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.877 [2024-12-05 13:34:53.227833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.877 [2024-12-05 13:34:53.228064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.877 [2024-12-05 13:34:53.228075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.877 [2024-12-05 13:34:53.228082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.877 [2024-12-05 13:34:53.228089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.877 [2024-12-05 13:34:53.240934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.877 [2024-12-05 13:34:53.241569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.877 [2024-12-05 13:34:53.241609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.877 [2024-12-05 13:34:53.241620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.877 [2024-12-05 13:34:53.241860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.877 [2024-12-05 13:34:53.242094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.877 [2024-12-05 13:34:53.242104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.877 [2024-12-05 13:34:53.242112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.877 [2024-12-05 13:34:53.242120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.877 [2024-12-05 13:34:53.254771] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.877 [2024-12-05 13:34:53.255467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.877 [2024-12-05 13:34:53.255506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.877 [2024-12-05 13:34:53.255518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.877 [2024-12-05 13:34:53.255759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.877 [2024-12-05 13:34:53.255992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.877 [2024-12-05 13:34:53.256004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.877 [2024-12-05 13:34:53.256011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.877 [2024-12-05 13:34:53.256019] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.877 [2024-12-05 13:34:53.268683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.877 [2024-12-05 13:34:53.269375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.877 [2024-12-05 13:34:53.269415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.877 [2024-12-05 13:34:53.269426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.877 [2024-12-05 13:34:53.269668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.877 [2024-12-05 13:34:53.269902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.877 [2024-12-05 13:34:53.269913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.877 [2024-12-05 13:34:53.269926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.877 [2024-12-05 13:34:53.269936] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.877 [2024-12-05 13:34:53.282596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.877 [2024-12-05 13:34:53.283167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.877 [2024-12-05 13:34:53.283187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.877 [2024-12-05 13:34:53.283196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.877 [2024-12-05 13:34:53.283417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.877 [2024-12-05 13:34:53.283638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.877 [2024-12-05 13:34:53.283647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.877 [2024-12-05 13:34:53.283654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.877 [2024-12-05 13:34:53.283661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.877 [2024-12-05 13:34:53.296520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.878 [2024-12-05 13:34:53.297209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.878 [2024-12-05 13:34:53.297249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.878 [2024-12-05 13:34:53.297260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.878 [2024-12-05 13:34:53.297500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.878 [2024-12-05 13:34:53.297726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.878 [2024-12-05 13:34:53.297736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.878 [2024-12-05 13:34:53.297744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.878 [2024-12-05 13:34:53.297752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.878 [2024-12-05 13:34:53.310398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.878 [2024-12-05 13:34:53.310929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.878 [2024-12-05 13:34:53.310969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.878 [2024-12-05 13:34:53.310982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.878 [2024-12-05 13:34:53.311224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.878 [2024-12-05 13:34:53.311449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.878 [2024-12-05 13:34:53.311460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.878 [2024-12-05 13:34:53.311468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.878 [2024-12-05 13:34:53.311476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.878 [2024-12-05 13:34:53.324340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.878 [2024-12-05 13:34:53.324966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.878 [2024-12-05 13:34:53.325005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.878 [2024-12-05 13:34:53.325018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.878 [2024-12-05 13:34:53.325262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.878 [2024-12-05 13:34:53.325487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.878 [2024-12-05 13:34:53.325497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.878 [2024-12-05 13:34:53.325506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.878 [2024-12-05 13:34:53.325515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.878 [2024-12-05 13:34:53.338171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.878 [2024-12-05 13:34:53.338794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.878 [2024-12-05 13:34:53.338833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.878 [2024-12-05 13:34:53.338844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.878 [2024-12-05 13:34:53.339094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.878 [2024-12-05 13:34:53.339320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.878 [2024-12-05 13:34:53.339330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.878 [2024-12-05 13:34:53.339338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.878 [2024-12-05 13:34:53.339347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.878 [2024-12-05 13:34:53.352193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.878 [2024-12-05 13:34:53.352910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.878 [2024-12-05 13:34:53.352951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.878 [2024-12-05 13:34:53.352963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.878 [2024-12-05 13:34:53.353206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.878 [2024-12-05 13:34:53.353431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.878 [2024-12-05 13:34:53.353441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.878 [2024-12-05 13:34:53.353449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.878 [2024-12-05 13:34:53.353457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.878 [2024-12-05 13:34:53.366119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.878 [2024-12-05 13:34:53.366679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.878 [2024-12-05 13:34:53.366718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.878 [2024-12-05 13:34:53.366735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.878 [2024-12-05 13:34:53.366985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.878 [2024-12-05 13:34:53.367211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.878 [2024-12-05 13:34:53.367222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.878 [2024-12-05 13:34:53.367230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.878 [2024-12-05 13:34:53.367238] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.878 [2024-12-05 13:34:53.380108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.878 [2024-12-05 13:34:53.380808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.878 [2024-12-05 13:34:53.380847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.878 [2024-12-05 13:34:53.380859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.878 [2024-12-05 13:34:53.381109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.878 [2024-12-05 13:34:53.381334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.878 [2024-12-05 13:34:53.381344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.878 [2024-12-05 13:34:53.381352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.878 [2024-12-05 13:34:53.381360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.878 [2024-12-05 13:34:53.394009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.878 [2024-12-05 13:34:53.394704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.878 [2024-12-05 13:34:53.394742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.878 [2024-12-05 13:34:53.394754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.878 [2024-12-05 13:34:53.395003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.878 [2024-12-05 13:34:53.395229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.879 [2024-12-05 13:34:53.395239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.879 [2024-12-05 13:34:53.395248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.879 [2024-12-05 13:34:53.395256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.879 [2024-12-05 13:34:53.407907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.879 [2024-12-05 13:34:53.408571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.879 [2024-12-05 13:34:53.408610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.879 [2024-12-05 13:34:53.408621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.879 [2024-12-05 13:34:53.408870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.879 [2024-12-05 13:34:53.409100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.879 [2024-12-05 13:34:53.409109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.879 [2024-12-05 13:34:53.409117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.879 [2024-12-05 13:34:53.409128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:30.879 [2024-12-05 13:34:53.421780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.879 [2024-12-05 13:34:53.422409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.879 [2024-12-05 13:34:53.422447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.879 [2024-12-05 13:34:53.422459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.879 [2024-12-05 13:34:53.422700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.879 [2024-12-05 13:34:53.422935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.879 [2024-12-05 13:34:53.422945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.879 [2024-12-05 13:34:53.422953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.879 [2024-12-05 13:34:53.422961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:30.879 [2024-12-05 13:34:53.435614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:30.879 [2024-12-05 13:34:53.436028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:30.879 [2024-12-05 13:34:53.436049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:30.879 [2024-12-05 13:34:53.436060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:30.879 [2024-12-05 13:34:53.436282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:30.879 [2024-12-05 13:34:53.436503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:30.879 [2024-12-05 13:34:53.436512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:30.879 [2024-12-05 13:34:53.436519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:30.879 [2024-12-05 13:34:53.436526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.142 [2024-12-05 13:34:53.449587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.142 [2024-12-05 13:34:53.450139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.142 [2024-12-05 13:34:53.450156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.142 [2024-12-05 13:34:53.450164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.142 [2024-12-05 13:34:53.450385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.142 [2024-12-05 13:34:53.450605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.142 [2024-12-05 13:34:53.450613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.142 [2024-12-05 13:34:53.450625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.142 [2024-12-05 13:34:53.450632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.142 [2024-12-05 13:34:53.463497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.142 [2024-12-05 13:34:53.464111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.142 [2024-12-05 13:34:53.464148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.142 [2024-12-05 13:34:53.464161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.142 [2024-12-05 13:34:53.464401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.142 [2024-12-05 13:34:53.464626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.142 [2024-12-05 13:34:53.464635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.142 [2024-12-05 13:34:53.464643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.142 [2024-12-05 13:34:53.464651] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.142 [2024-12-05 13:34:53.477327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.142 [2024-12-05 13:34:53.477964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.142 [2024-12-05 13:34:53.478002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.142 [2024-12-05 13:34:53.478015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.142 [2024-12-05 13:34:53.478257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.142 [2024-12-05 13:34:53.478482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.142 [2024-12-05 13:34:53.478491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.142 [2024-12-05 13:34:53.478499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.142 [2024-12-05 13:34:53.478507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.142 [2024-12-05 13:34:53.491186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.142 [2024-12-05 13:34:53.491842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.142 [2024-12-05 13:34:53.491888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.142 [2024-12-05 13:34:53.491901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.142 [2024-12-05 13:34:53.492145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.142 [2024-12-05 13:34:53.492369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.142 [2024-12-05 13:34:53.492378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.142 [2024-12-05 13:34:53.492386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.142 [2024-12-05 13:34:53.492395] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.142 [2024-12-05 13:34:53.505053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.142 [2024-12-05 13:34:53.505613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.142 [2024-12-05 13:34:53.505633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.142 [2024-12-05 13:34:53.505642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.142 [2024-12-05 13:34:53.505869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.142 [2024-12-05 13:34:53.506092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.142 [2024-12-05 13:34:53.506100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.142 [2024-12-05 13:34:53.506109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.142 [2024-12-05 13:34:53.506117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.142 [2024-12-05 13:34:53.518971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.142 [2024-12-05 13:34:53.519511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.142 [2024-12-05 13:34:53.519549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.142 [2024-12-05 13:34:53.519561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.142 [2024-12-05 13:34:53.519803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.142 [2024-12-05 13:34:53.520038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.142 [2024-12-05 13:34:53.520049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.142 [2024-12-05 13:34:53.520057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.142 [2024-12-05 13:34:53.520066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.142 [2024-12-05 13:34:53.532935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.142 [2024-12-05 13:34:53.533629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.142 [2024-12-05 13:34:53.533666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.142 [2024-12-05 13:34:53.533678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.142 [2024-12-05 13:34:53.533928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.142 [2024-12-05 13:34:53.534154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.142 [2024-12-05 13:34:53.534163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.142 [2024-12-05 13:34:53.534170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.142 [2024-12-05 13:34:53.534179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.142 [2024-12-05 13:34:53.546829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.142 [2024-12-05 13:34:53.547482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.142 [2024-12-05 13:34:53.547520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.142 [2024-12-05 13:34:53.547536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.142 [2024-12-05 13:34:53.547776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.142 [2024-12-05 13:34:53.548011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.142 [2024-12-05 13:34:53.548020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.142 [2024-12-05 13:34:53.548028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.142 [2024-12-05 13:34:53.548036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.142 [2024-12-05 13:34:53.560688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.142 [2024-12-05 13:34:53.561267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.142 [2024-12-05 13:34:53.561305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.142 [2024-12-05 13:34:53.561317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.142 [2024-12-05 13:34:53.561557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.142 [2024-12-05 13:34:53.561781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.142 [2024-12-05 13:34:53.561790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.142 [2024-12-05 13:34:53.561798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.142 [2024-12-05 13:34:53.561806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.143 [2024-12-05 13:34:53.574677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.143 [2024-12-05 13:34:53.575371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.143 [2024-12-05 13:34:53.575409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.143 [2024-12-05 13:34:53.575420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.143 [2024-12-05 13:34:53.575661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.143 [2024-12-05 13:34:53.575892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.143 [2024-12-05 13:34:53.575902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.143 [2024-12-05 13:34:53.575910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.143 [2024-12-05 13:34:53.575918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.143 [2024-12-05 13:34:53.588565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.143 [2024-12-05 13:34:53.589103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.143 [2024-12-05 13:34:53.589141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.143 [2024-12-05 13:34:53.589152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.143 [2024-12-05 13:34:53.589392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.143 [2024-12-05 13:34:53.589621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.143 [2024-12-05 13:34:53.589630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.143 [2024-12-05 13:34:53.589639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.143 [2024-12-05 13:34:53.589647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:31.143 [2024-12-05 13:34:53.602508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.143 [2024-12-05 13:34:53.603050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.143 [2024-12-05 13:34:53.603089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.143 [2024-12-05 13:34:53.603100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.143 [2024-12-05 13:34:53.603340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.143 [2024-12-05 13:34:53.603565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.143 [2024-12-05 13:34:53.603574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.143 [2024-12-05 13:34:53.603583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.143 [2024-12-05 13:34:53.603591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
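Buried in the error stream above, the harness's shell trace shows its startup wait completing: (( i == 0 )) looks like the retry-budget check at the tail of a counted poll loop in common/autotest_common.sh, return 0 means the target answered before the budget ran out, and timing_exit start_nvmf_tgt closes the timed startup phase. The reconnect errors keep flowing because at this point only the target process exists; its transport, subsystem, and listener are configured next. A minimal sketch of that wait pattern follows; the function name, retry budget, and example probe are illustrative assumptions, not the harness's actual values:

    # Hypothetical poll-until-ready helper in the harness's style:
    # count down a retry budget and fail only when it is exhausted.
    wait_for_ready() {                      # usage: wait_for_ready <probe command...>
        local i
        for ((i = 50; i > 0; i--)); do
            "$@" >/dev/null 2>&1 && break   # probe succeeded: service is up
            sleep 0.1
        done
        (( i == 0 )) && return 1            # budget exhausted, give up
        return 0
    }
    # e.g. wait until a process named nvmf_tgt is running:
    wait_for_ready pgrep -x nvmf_tgt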
00:30:31.143 [2024-12-05 13:34:53.616461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.143 [2024-12-05 13:34:53.616774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.143 [2024-12-05 13:34:53.616798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.143 [2024-12-05 13:34:53.616807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.143 [2024-12-05 13:34:53.617039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.143 [2024-12-05 13:34:53.617261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.143 [2024-12-05 13:34:53.617278] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.143 [2024-12-05 13:34:53.617285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.143 [2024-12-05 13:34:53.617293] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.143 [2024-12-05 13:34:53.630366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.143 [2024-12-05 13:34:53.630925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.143 [2024-12-05 13:34:53.630963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.143 [2024-12-05 13:34:53.630976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.143 [2024-12-05 13:34:53.631223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.143 [2024-12-05 13:34:53.631454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.143 [2024-12-05 13:34:53.631463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.143 [2024-12-05 13:34:53.631471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.143 [2024-12-05 13:34:53.631479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:31.143 [2024-12-05 13:34:53.638943] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.143 [2024-12-05 13:34:53.644345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:31.143 [2024-12-05 13:34:53.644990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.143 [2024-12-05 13:34:53.645028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.143 [2024-12-05 13:34:53.645040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.143 [2024-12-05 13:34:53.645280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:31.143 [2024-12-05 13:34:53.645505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.143 [2024-12-05 13:34:53.645514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.143 [2024-12-05 13:34:53.645521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.143 [2024-12-05 13:34:53.645529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
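From here the two halves of the test interleave: while the initiator keeps retrying in the background, the target side starts taking configuration RPCs. rpc_cmd nvmf_create_transport -t tcp -o -u 8192 initializes the TCP transport (acknowledged by the '*** TCP Transport Init ***' notice), and rpc_cmd bdev_malloc_create 64 512 -b Malloc0 requests the RAM-backed bdev that will hold the test data. rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py; issued by hand, the same two steps would look roughly like this (flags copied from the trace, default RPC socket assumed):

    # Initialize the NVMe-oF TCP transport, then create the RAM disk that
    # will back the namespace. 64 is the bdev size in MB, 512 the block
    # size in bytes; -u sets the transport's I/O unit size, and -o is
    # carried over from the trace as-is.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0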
00:30:31.143 [2024-12-05 13:34:53.658390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.143 [2024-12-05 13:34:53.658972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.143 [2024-12-05 13:34:53.658992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.143 [2024-12-05 13:34:53.659000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.143 [2024-12-05 13:34:53.659223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.143 [2024-12-05 13:34:53.659444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.143 [2024-12-05 13:34:53.659452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.143 [2024-12-05 13:34:53.659459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.143 [2024-12-05 13:34:53.659470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.143 [2024-12-05 13:34:53.672329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.143 [2024-12-05 13:34:53.672886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.143 [2024-12-05 13:34:53.672904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.143 [2024-12-05 13:34:53.672913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.143 [2024-12-05 13:34:53.673134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.143 [2024-12-05 13:34:53.673354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.143 [2024-12-05 13:34:53.673362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.143 [2024-12-05 13:34:53.673369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.143 [2024-12-05 13:34:53.673376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:31.143 Malloc0 00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.143 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:31.143 [2024-12-05 13:34:53.686244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.143 [2024-12-05 13:34:53.686953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.143 [2024-12-05 13:34:53.686991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.143 [2024-12-05 13:34:53.687003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.144 [2024-12-05 13:34:53.687247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.144 [2024-12-05 13:34:53.687480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.144 [2024-12-05 13:34:53.687489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.144 [2024-12-05 13:34:53.687497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:31.144 [2024-12-05 13:34:53.687505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:31.144 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.144 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:31.144 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:31.144 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:31.144 [2024-12-05 13:34:53.700163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:31.144 [2024-12-05 13:34:53.700677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:31.144 [2024-12-05 13:34:53.700715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a16a0 with addr=10.0.0.2, port=4420 00:30:31.144 [2024-12-05 13:34:53.700727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a16a0 is same with the state(6) to be set 00:30:31.144 [2024-12-05 13:34:53.700981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a16a0 (9): Bad file descriptor 00:30:31.144 [2024-12-05 13:34:53.701206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:31.144 [2024-12-05 13:34:53.701215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:31.144 [2024-12-05 13:34:53.701223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
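Malloc0, echoed back by the RPC at the start of the block above, is then wired into the subsystem the initiator has been trying to reach: nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 creates it (-a allows any host to connect, -s sets its serial number), and nvmf_subsystem_add_ns attaches Malloc0 as its namespace. The listener on 10.0.0.2:4420 is added immediately afterwards, which is what finally lets the reconnect loop succeed. The equivalent standalone sequence, with every flag copied from the trace:

    # Create the subsystem, attach the namespace, then expose it on the
    # TCP listener the initiator has been probing all along.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420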
00:30:31.144 [2024-12-05 13:34:53.701231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:31.144 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.144 13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:31.404 [2024-12-05 13:34:53.708814] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[2024-12-05 13:34:53.714093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
13:34:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1127136
[2024-12-05 13:34:53.738017] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:30:32.606 4796.00 IOPS, 18.73 MiB/s
[2024-12-05T12:34:56.115Z] 5578.62 IOPS, 21.79 MiB/s
[2024-12-05T12:34:57.071Z] 6192.56 IOPS, 24.19 MiB/s
[2024-12-05T12:34:58.454Z] 6683.60 IOPS, 26.11 MiB/s
[2024-12-05T12:34:59.398Z] 7089.73 IOPS, 27.69 MiB/s
[2024-12-05T12:35:00.341Z] 7422.00 IOPS, 28.99 MiB/s
[2024-12-05T12:35:01.283Z] 7704.54 IOPS, 30.10 MiB/s
[2024-12-05T12:35:02.246Z] 7947.36 IOPS, 31.04 MiB/s
00:30:39.678 Latency(us)
[2024-12-05T12:35:02.246Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:39.678 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:39.678 Verification LBA range: start 0x0 length 0x4000
00:30:39.678 Nvme1n1 : 15.00 8174.25 31.93 9699.12 0.00 7135.96 805.55 16384.00
00:30:39.678 [2024-12-05T12:35:02.246Z] ===================================================================================================================
00:30:39.678 [2024-12-05T12:35:02.246Z] Total : 8174.25 31.93 9699.12 0.00 7135.96 805.55 16384.00
00:30:39.678 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:39.678 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:39.678 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:39.678 rmmod nvme_tcp 00:30:39.984 rmmod nvme_fabrics 00:30:39.984 rmmod nvme_keyring 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1128324 ']' 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1128324 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1128324 ']' 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1128324 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1128324 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1128324' 00:30:39.984 killing process with pid 1128324 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1128324 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1128324 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.984 13:35:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:42.581 00:30:42.581 real 0m28.985s 00:30:42.581 user 1m3.004s 00:30:42.581 sys 0m8.257s 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # 
set +x 00:30:42.581 ************************************ 00:30:42.581 END TEST nvmf_bdevperf 00:30:42.581 ************************************ 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:42.581 ************************************ 00:30:42.581 START TEST nvmf_target_disconnect 00:30:42.581 ************************************ 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:42.581 * Looking for test storage... 00:30:42.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:42.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.581 --rc genhtml_branch_coverage=1 00:30:42.581 --rc genhtml_function_coverage=1 00:30:42.581 --rc genhtml_legend=1 00:30:42.581 --rc geninfo_all_blocks=1 00:30:42.581 --rc geninfo_unexecuted_blocks=1 00:30:42.581 00:30:42.581 ' 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:42.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.581 --rc genhtml_branch_coverage=1 00:30:42.581 --rc genhtml_function_coverage=1 00:30:42.581 --rc genhtml_legend=1 00:30:42.581 --rc geninfo_all_blocks=1 00:30:42.581 --rc geninfo_unexecuted_blocks=1 00:30:42.581 00:30:42.581 ' 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:42.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.581 --rc genhtml_branch_coverage=1 00:30:42.581 --rc genhtml_function_coverage=1 00:30:42.581 --rc genhtml_legend=1 00:30:42.581 --rc geninfo_all_blocks=1 00:30:42.581 --rc geninfo_unexecuted_blocks=1 00:30:42.581 00:30:42.581 ' 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:42.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.581 --rc genhtml_branch_coverage=1 00:30:42.581 --rc genhtml_function_coverage=1 00:30:42.581 --rc genhtml_legend=1 00:30:42.581 --rc geninfo_all_blocks=1 00:30:42.581 --rc geninfo_unexecuted_blocks=1 00:30:42.581 00:30:42.581 ' 00:30:42.581 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:42.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:42.582 13:35:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.719 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:50.720 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:50.720 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:50.720 Found net devices under 0000:31:00.0: cvl_0_0 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:50.720 Found net devices under 0000:31:00.1: cvl_0_1 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:50.720 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:50.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:50.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:30:50.981 00:30:50.981 --- 10.0.0.2 ping statistics --- 00:30:50.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.981 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:50.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:50.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.335 ms 00:30:50.981 00:30:50.981 --- 10.0.0.1 ping statistics --- 00:30:50.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.981 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:50.981 ************************************ 00:30:50.981 START TEST nvmf_target_disconnect_tc1 00:30:50.981 ************************************ 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:50.981 13:35:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:50.981 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:51.242 [2024-12-05 13:35:13.575176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.242 [2024-12-05 13:35:13.575250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18aad00 with addr=10.0.0.2, port=4420 00:30:51.242 [2024-12-05 13:35:13.575284] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:51.242 [2024-12-05 13:35:13.575296] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:51.242 [2024-12-05 13:35:13.575304] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:51.242 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:51.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:51.242 Initializing NVMe Controllers 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:51.242 00:30:51.242 real 0m0.135s 00:30:51.242 user 0m0.057s 00:30:51.242 sys 0m0.077s 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:51.242 ************************************ 00:30:51.242 END TEST nvmf_target_disconnect_tc1 00:30:51.242 ************************************ 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:51.242 ************************************ 00:30:51.242 START TEST nvmf_target_disconnect_tc2 00:30:51.242 ************************************ 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1134881 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1134881 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1134881 ']' 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:51.242 13:35:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:51.242 [2024-12-05 13:35:13.743395] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:30:51.242 [2024-12-05 13:35:13.743454] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.504 [2024-12-05 13:35:13.852930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:51.504 [2024-12-05 13:35:13.905106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.504 [2024-12-05 13:35:13.905157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:51.504 [2024-12-05 13:35:13.905165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.504 [2024-12-05 13:35:13.905173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.504 [2024-12-05 13:35:13.905180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:51.504 [2024-12-05 13:35:13.907228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:51.504 [2024-12-05 13:35:13.907386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:51.504 [2024-12-05 13:35:13.907544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:51.504 [2024-12-05 13:35:13.907545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:52.076 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.076 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:52.076 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:52.076 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:52.076 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.076 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.076 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:52.076 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.076 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.076 Malloc0 00:30:52.076 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.076 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:52.076 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.076 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.076 [2024-12-05 13:35:14.638153] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.337 13:35:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.337 [2024-12-05 13:35:14.678591] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1135229 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:52.337 13:35:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:54.257 13:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1134881 00:30:54.257 13:35:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Read completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Read completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Read completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with 
error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Read completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Read completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Read completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Read completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Read completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Read completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Read completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Read completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Read completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Read completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Read completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 Write completed with error (sct=0, sc=8) 00:30:54.257 starting I/O failed 00:30:54.257 [2024-12-05 13:35:16.713181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:54.257 [2024-12-05 13:35:16.713540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-05 13:35:16.713560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-05 13:35:16.714155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-05 13:35:16.714184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 00:30:54.257 [2024-12-05 13:35:16.714515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.257 [2024-12-05 13:35:16.714527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.257 qpair failed and we were unable to recover it. 
00:30:54.257 [2024-12-05 13:35:16.714704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.257 [2024-12-05 13:35:16.714715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:54.257 qpair failed and we were unable to recover it.
(the same three-line error sequence repeats continuously, with only the timestamps advancing, from 13:35:16.714893 through 13:35:16.778273)
00:30:54.263 [2024-12-05 13:35:16.778560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.263 [2024-12-05 13:35:16.778569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:54.263 qpair failed and we were unable to recover it.
00:30:54.263 [2024-12-05 13:35:16.778879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.778888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.779190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.779199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.779490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.779499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.779813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.779822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.780002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.780012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.780307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.780315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.780597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.780605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.780932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.780940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.781247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.781256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.781417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.781426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 
00:30:54.263 [2024-12-05 13:35:16.781718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.781727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.782022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.782031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.782350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.782359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.782651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.782660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.783004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.783013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.783411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.783419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.783753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.783762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.784082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.784090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.784395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.263 [2024-12-05 13:35:16.784404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.263 qpair failed and we were unable to recover it. 00:30:54.263 [2024-12-05 13:35:16.784712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.784721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 
00:30:54.264 [2024-12-05 13:35:16.785011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.785020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.785327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.785335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.785627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.785636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.785936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.785944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.786274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.786283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.786608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.786616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.786897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.786907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.787197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.787206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.787483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.787491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.787780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.787788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 
00:30:54.264 [2024-12-05 13:35:16.788099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.788109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.788440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.788449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.788777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.788786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.789115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.789124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.789465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.789474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.789768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.789777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.789952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.789961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.790296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.790305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.790599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.790608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.790955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.790963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 
00:30:54.264 [2024-12-05 13:35:16.791272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.791281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.791575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.791583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.791738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.791747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.792046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.792055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.792362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.792370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.792681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.792689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.792995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.793005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.793243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.793251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.793559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.793567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.793860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.793871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 
00:30:54.264 [2024-12-05 13:35:16.794162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.794171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.794474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.794482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.794781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.794789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.795099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.795108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.795416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.795425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.795759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.795767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.264 [2024-12-05 13:35:16.796095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.264 [2024-12-05 13:35:16.796105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.264 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.796411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.796419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.796707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.796725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.797010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.797018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 
00:30:54.265 [2024-12-05 13:35:16.797378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.797387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.797679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.797688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.798000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.798008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.798341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.798350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.798683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.798691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.798982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.798991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.799317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.799326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.799485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.799493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.799788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.799796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.800106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.800114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 
00:30:54.265 [2024-12-05 13:35:16.800439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.800447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.800744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.800752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.800936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.800945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.801272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.801280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.801576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.801584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.801782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.801790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.802081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.802090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.802373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.802381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.802687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.802695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.802982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.802990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 
00:30:54.265 [2024-12-05 13:35:16.803177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.803185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.803450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.803458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.803747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.803756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.804036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.804046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.804368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.804376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.804706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.804715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.804922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.804931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.805233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.805241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.805556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.805565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.805895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.805904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 
00:30:54.265 [2024-12-05 13:35:16.806239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.806247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.806578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.806587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.806933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.806942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.807262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.265 [2024-12-05 13:35:16.807271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.265 qpair failed and we were unable to recover it. 00:30:54.265 [2024-12-05 13:35:16.807599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.807608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.807905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.807913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.808250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.808258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.808551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.808560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.808751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.808760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.809039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.809048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 
00:30:54.266 [2024-12-05 13:35:16.809380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.809388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.809568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.809576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.809856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.809867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.810193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.810202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.810501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.810509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.810833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.810842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.811203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.811214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.811552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.811560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.811868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.811877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.812189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.812197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 
00:30:54.266 [2024-12-05 13:35:16.812485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.812492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.812803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.812811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.813059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.813068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.813355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.813364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.813680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.813689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.814018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.814027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.814340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.814349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.814664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.814672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.814911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.814919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.815264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.815273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 
00:30:54.266 [2024-12-05 13:35:16.815590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.815599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.815890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.815900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.816177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.816185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.816513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.816530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.266 [2024-12-05 13:35:16.816866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.266 [2024-12-05 13:35:16.816874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.266 qpair failed and we were unable to recover it. 00:30:54.541 [2024-12-05 13:35:16.817186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.541 [2024-12-05 13:35:16.817196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.541 qpair failed and we were unable to recover it. 00:30:54.541 [2024-12-05 13:35:16.817506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.541 [2024-12-05 13:35:16.817514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.541 qpair failed and we were unable to recover it. 00:30:54.541 [2024-12-05 13:35:16.817683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.541 [2024-12-05 13:35:16.817692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.541 qpair failed and we were unable to recover it. 00:30:54.541 [2024-12-05 13:35:16.818046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.541 [2024-12-05 13:35:16.818055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.541 qpair failed and we were unable to recover it. 00:30:54.541 [2024-12-05 13:35:16.818333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.541 [2024-12-05 13:35:16.818341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.541 qpair failed and we were unable to recover it. 
00:30:54.541 [2024-12-05 13:35:16.818676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.541 [2024-12-05 13:35:16.818684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:54.541 qpair failed and we were unable to recover it.
00:30:54.541 [... the same three-line error pattern (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats ~210 times with only the timestamps advancing; first and last occurrences kept ...]
00:30:54.547 [2024-12-05 13:35:16.881135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.547 [2024-12-05 13:35:16.881143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:54.547 qpair failed and we were unable to recover it.
00:30:54.547 [2024-12-05 13:35:16.881485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.881493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.881861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.881871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.882179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.882188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.882511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.882520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.882804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.882813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.883123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.883132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.883420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.883428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.883706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.883715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.884076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.884085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.884465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.884473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 
00:30:54.547 [2024-12-05 13:35:16.884799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.884808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.885115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.885123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.885410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.885418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.885712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.885720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.886028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.886037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.886370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.886378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.886668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.886676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.886982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.886991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.887280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.887288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.887583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.887591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 
00:30:54.547 [2024-12-05 13:35:16.887905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.887914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.887990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.887998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.888176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.888184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.888496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.888504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.888660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.888669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.888950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.888958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.889276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.889284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.889568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.889576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.889866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.889875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.890178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.890186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 
00:30:54.547 [2024-12-05 13:35:16.890457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.890464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.890783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.890791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.891091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.891099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.891392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.891403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.891688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.891695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.891872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.891881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.892190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.892198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.892490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.892498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.892802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.892811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 00:30:54.547 [2024-12-05 13:35:16.893107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.547 [2024-12-05 13:35:16.893115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.547 qpair failed and we were unable to recover it. 
00:30:54.547 [2024-12-05 13:35:16.893407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.893417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.893724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.893733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.894130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.894139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.894457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.894466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.894766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.894774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.895064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.895072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.895414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.895422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.895749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.895758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.896031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.896040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.896348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.896357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 
00:30:54.548 [2024-12-05 13:35:16.896667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.896676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.897004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.897013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.897328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.897336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.897640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.897649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.897929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.897937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.898249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.898257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.898566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.898575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.898923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.898931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.899237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.899246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.899560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.899568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 
00:30:54.548 [2024-12-05 13:35:16.899907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.899916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.900215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.900224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.900530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.900539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.900841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.900849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.901178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.901187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.901382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.901391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.901697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.901705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.902024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.902032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.902393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.902401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.902690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.902698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 
00:30:54.548 [2024-12-05 13:35:16.902995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.903003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.903316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.903325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.903661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.903670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.903958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.903968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.904274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.904283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.904629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.904637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.904833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.904840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.905153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.905162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.905470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.905479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.905790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.905798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 
00:30:54.548 [2024-12-05 13:35:16.906114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.906124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.906447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.548 [2024-12-05 13:35:16.906455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.548 qpair failed and we were unable to recover it. 00:30:54.548 [2024-12-05 13:35:16.906624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.906631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.906952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.906961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.907121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.907129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.907467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.907475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.907782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.907791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.908103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.908111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.908326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.908334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.908610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.908618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 
00:30:54.549 [2024-12-05 13:35:16.908906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.908914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.909229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.909237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.909514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.909522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.909822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.909830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.910166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.910175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.910482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.910489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.910775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.910784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.911090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.911099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.911483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.911491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.911687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.911694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 
00:30:54.549 [2024-12-05 13:35:16.911955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.911964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.912280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.912289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.912573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.912582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.912865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.912874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.913155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.913163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.913457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.913466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.913759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.913768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.914047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.914055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.914346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.914354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.914631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.914639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 
00:30:54.549 [2024-12-05 13:35:16.914954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.914963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.915297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.915306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.915631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.915639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.915975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.915985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.916277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.916286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.916590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.916599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.916786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.916795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.917131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.917139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.917435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.917444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.917739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.917748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 
00:30:54.549 [2024-12-05 13:35:16.918030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.918038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.918342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.918351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.549 [2024-12-05 13:35:16.918631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.549 [2024-12-05 13:35:16.918639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.549 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.918950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.918959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.919283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.919292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.919414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.919422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 
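errno 111 in the entries above is ECONNREFUSED on Linux: the target at 10.0.0.2 is actively refusing TCP connections on port 4420 (the conventional NVMe-oF/TCP port), so nvme_tcp_qpair_connect_sock never gets a socket up and each attempt ends with the qpair declared unrecoverable. A minimal sketch of the failing call sequence, assuming a Linux host and nothing listening on the target port (standalone illustration, not the SPDK sources):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* conventional NVMe/TCP port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the peer up but the port closed, errno is 111 on Linux. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

If the peer answers with a TCP RST, this prints "connect() failed, errno = 111 (Connection refused)"; an unreachable or silently dropping host would instead surface ETIMEDOUT or EHOSTUNREACH. A steady stream of 111s, as above, therefore indicates the target machine is up but no NVMe/TCP listener is bound to port 4420.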
00:30:54.550 [2024-12-05 13:35:16.919615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d37030 is same with the state(6) to be set
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Write completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Write completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Write completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Write completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Write completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Write completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Write completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Write completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Write completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Read completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Write completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 Write completed with error (sct=0, sc=8)
00:30:54.550 starting I/O failed
00:30:54.550 [2024-12-05 13:35:16.920077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:54.550 [2024-12-05 13:35:16.920406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.550 [2024-12-05 13:35:16.920424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:54.550 qpair failed and we were unable to recover it.
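In the "completed with error (sct=0, sc=8)" burst above, sct and sc are the Status Code Type and Status Code fields of each NVMe completion: sct=0 selects the generic command status set, in which, per the NVMe base specification, code 0x08 is Command Aborted due to SQ Deletion, consistent with the qpair being torn down; the CQ transport error -6 is negated ENXIO, matching the quoted "No such device or address". A small decoder for the 15-bit status field (bits 31:17 of completion dword 3), written against the spec layout rather than any SPDK struct (illustration only):

#include <stdint.h>
#include <stdio.h>

struct nvme_status {
    uint8_t sc;   /* Status Code,         bits 7:0   */
    uint8_t sct;  /* Status Code Type,    bits 10:8  */
    uint8_t crd;  /* Command Retry Delay, bits 12:11 (NVMe 1.4+) */
    uint8_t more; /* more info in error log, bit 13  */
    uint8_t dnr;  /* Do Not Retry,        bit 14     */
};

static struct nvme_status decode_status(uint16_t sf)
{
    struct nvme_status s = {
        .sc   = sf & 0xff,
        .sct  = (sf >> 8) & 0x7,
        .crd  = (sf >> 11) & 0x3,
        .more = (sf >> 13) & 0x1,
        .dnr  = (sf >> 14) & 0x1,
    };
    return s;
}

int main(void)
{
    /* sct=0, sc=8, as reported in the completions above */
    struct nvme_status s = decode_status((0x0 << 8) | 0x08);
    printf("sct=%u sc=0x%02x dnr=%u\n", s.sct, s.sc, s.dnr);
    return 0;
}

The dnr (Do Not Retry) bit is the field a host consults to decide whether resubmitting an aborted command could ever succeed.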
00:30:54.550 [2024-12-05 13:35:16.920826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.920839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.921278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.921318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.921641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.921655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.921874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.921888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.922210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.922249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.922585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.922599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.923031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.923070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.923405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.923415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.923731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.923739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.924033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.924041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 
00:30:54.550 [2024-12-05 13:35:16.924362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.924371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.924718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.924726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.924942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.924950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.925277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.925285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.925590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.925599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.925779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.925788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.926184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.926192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.926502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.926510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.926809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.926817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.927123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.927133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 
00:30:54.550 [2024-12-05 13:35:16.927467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.927476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.927766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.927773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.928089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.928097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.928411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.928419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.928619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.928628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.928952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.928960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.929138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.929146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.929486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.550 [2024-12-05 13:35:16.929494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.550 qpair failed and we were unable to recover it. 00:30:54.550 [2024-12-05 13:35:16.929829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.929837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.930201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.930211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 
00:30:54.551 [2024-12-05 13:35:16.930512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.930520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.930846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.930855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.931175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.931185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.931462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.931470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.931760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.931768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.932046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.932055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.932350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.932358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.932689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.932697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.932984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.932992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.933215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.933223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 
00:30:54.551 [2024-12-05 13:35:16.933542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.933550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.933906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.933914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.934209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.934217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.934525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.934533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.934846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.934855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.935184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.935193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.935504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.935514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.935806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.935816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.936138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.936147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.936473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.936482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 
00:30:54.551 [2024-12-05 13:35:16.936792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.936801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.937088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.937098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.937386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.937395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.937699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.937708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.937888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.937898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.938144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.938153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.938501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.938509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.938822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.938831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.939140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.939150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.939451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.939459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 
00:30:54.551 [2024-12-05 13:35:16.939778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.939786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.940000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.940008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.940161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.940169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.940460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.940469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.940764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.940772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.941071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.941079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.941409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.941416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.941746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.941754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.942079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.942087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.942361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.942369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 
00:30:54.551 [2024-12-05 13:35:16.942683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.942691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.943013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.551 [2024-12-05 13:35:16.943022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.551 qpair failed and we were unable to recover it. 00:30:54.551 [2024-12-05 13:35:16.943359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.943368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.943645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.943653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.943987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.943995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.944274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.944282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.944466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.944474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.944784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.944791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.945128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.945136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.945481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.945490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 
00:30:54.552 [2024-12-05 13:35:16.945796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.945804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.946108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.946116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.946411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.946420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.946749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.946757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.947032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.947040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.947330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.947338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.947642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.947651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.947957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.947965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.948242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.948250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.948420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.948429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 
00:30:54.552 [2024-12-05 13:35:16.948759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.948768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.949074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.949083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.949286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.949294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.949596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.949605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.949894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.949902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.950200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.950209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.950498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.950506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.950795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.950803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.951116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.951124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.951418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.951427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 
00:30:54.552 [2024-12-05 13:35:16.951716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.951725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.952031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.952041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.952229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.952237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.952510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.952518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.952721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.952730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.953046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.953054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.953340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.953348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.953633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.953641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.953934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.953942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.954252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.954261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 
00:30:54.552 [2024-12-05 13:35:16.954538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.954546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.954873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.954881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.954916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.954925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.955196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.955204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.955502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.955511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.955799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.955808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.956013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.956022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.956335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.956343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.956669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.956677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 00:30:54.552 [2024-12-05 13:35:16.956836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.552 [2024-12-05 13:35:16.956845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.552 qpair failed and we were unable to recover it. 
00:30:54.553 [2024-12-05 13:35:16.957147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.957157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.957480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.957488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.957690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.957698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.958007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.958016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.958354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.958362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.958665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.958673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.958976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.958985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.959314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.959322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.959612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.959620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.959926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.959935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 
00:30:54.553 [2024-12-05 13:35:16.960094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.960103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.960414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.960422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.960694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.960702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.960880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.960890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.961196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.961204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.961511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.961521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.961806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.961815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.962118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.962135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.962446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.962453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.962759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.962768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 
00:30:54.553 [2024-12-05 13:35:16.963075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.963084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.963387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.963396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.963685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.963693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.964019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.964029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.964351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.964360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.964659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.964668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.965002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.965010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.965319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.965329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.965522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.965531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.965747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.965754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 
00:30:54.553 [2024-12-05 13:35:16.966081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.966090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.966420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.966430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.966721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.966730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.967071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.967079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.967365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.967374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.967662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.967670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.967975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.967984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.968322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.968331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.968695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.968704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 00:30:54.553 [2024-12-05 13:35:16.969005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.553 [2024-12-05 13:35:16.969013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.553 qpair failed and we were unable to recover it. 
00:30:54.553 [2024-12-05 13:35:16.969291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.553 [2024-12-05 13:35:16.969300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:54.553 qpair failed and we were unable to recover it.
00:30:54.558 [2024-12-05 13:35:16.969593 .. 2024-12-05 13:35:17.030022] (same three-line error sequence repeated ~209 more times: posix_sock_create connect() failed with errno = 111, followed by nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420; each qpair failed and was not recovered)
00:30:54.558 [2024-12-05 13:35:17.030355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.030363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.030697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.030707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.031012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.031021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.031185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.031193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.031468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.031477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.031788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.031796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.031975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.031984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.032281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.032289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.032498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.032507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.032814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.032825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 
00:30:54.558 [2024-12-05 13:35:17.033132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.033140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.033440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.033448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.033617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.033626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.033913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.033921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.034230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.034240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.034423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.034433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.034736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.034746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.035028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.035037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.035332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.035341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.035495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.035504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 
00:30:54.558 [2024-12-05 13:35:17.035887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.035896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.036193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.036201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.036385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.036396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.036699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.036708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.037042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.037051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.037339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.037348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.037651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.037660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.037929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.037938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.038244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.038252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.038560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.038568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 
00:30:54.558 [2024-12-05 13:35:17.038868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.038877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.039170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.039179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.039485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.039494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.039817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.558 [2024-12-05 13:35:17.039827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.558 qpair failed and we were unable to recover it. 00:30:54.558 [2024-12-05 13:35:17.040115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.040123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.040436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.040445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.040717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.040728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.041051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.041061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.041375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.041383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.041688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.041696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-05 13:35:17.042011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.042019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.042348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.042357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.042632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.042640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.042959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.042969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.043310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.043318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.043642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.043651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.043938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.043947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.044264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.044275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.044629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.044638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.044896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.044907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-05 13:35:17.045218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.045229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.045514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.045522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.045804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.045812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.046104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.046113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.046396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.046405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.046691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.046700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.046856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.046872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.047191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.047200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.047488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.047496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.047782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.047790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-05 13:35:17.048091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.048099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.048379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.048387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.048693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.048702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.048859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.048870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.049157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.049167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.049491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.049501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.049798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.049807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.050133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.050142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.050498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.050507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.050792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.050800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-05 13:35:17.051088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.051098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.051404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.051413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.051708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.051717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.052019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.052028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.052331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.052339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.052647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.052657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.052966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.052975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.053286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.053295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.053587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.053595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.053891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.053899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 
00:30:54.559 [2024-12-05 13:35:17.054270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.054278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.054612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.054620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.054952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.054961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.055270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.055278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.055582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.055592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.055878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.055886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.056187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.056197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.056480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.559 [2024-12-05 13:35:17.056489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.559 qpair failed and we were unable to recover it. 00:30:54.559 [2024-12-05 13:35:17.056770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.056779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.057105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.057115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 
00:30:54.560 [2024-12-05 13:35:17.057395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.057404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.057726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.057735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.057912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.057921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.058251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.058259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.058555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.058563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.058877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.058887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.059174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.059183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.059474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.059482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.059794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.059802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.060168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.060177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 
00:30:54.560 [2024-12-05 13:35:17.060498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.060507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.060829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.060837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.060989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.060998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.061294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.061302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.061496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.061504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.061777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.061787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.062104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.062113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.062420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.062430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.062755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.062764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.063080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.063090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 
00:30:54.560 [2024-12-05 13:35:17.063398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.063407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.063640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.063648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.063824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.063834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.064116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.064124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.064448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.064457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.064785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.064794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.065102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.065111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.065387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.065396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.065743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.065752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.066029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.066039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 
00:30:54.560 [2024-12-05 13:35:17.066344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.066353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.066662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.066671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.066976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.066985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.067269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.067277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.067605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.067614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.067922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.067930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.068288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.068297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.068585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.068592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.068898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.068906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 00:30:54.560 [2024-12-05 13:35:17.069204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.560 [2024-12-05 13:35:17.069215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.560 qpair failed and we were unable to recover it. 
00:30:54.560 [2024-12-05 13:35:17.069530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.560 [2024-12-05 13:35:17.069538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:54.560 qpair failed and we were unable to recover it.
00:30:54.560 [... the same three-line failure sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every connect retry from 13:35:17.069846 through 13:35:17.133523, console timestamps 00:30:54.560-00:30:54.842 ...]
00:30:54.842 [2024-12-05 13:35:17.133830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.133839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.134059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.134067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.134386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.134395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.134704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.134714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.135022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.135031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.135352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.135360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.135719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.135727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.136031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.136040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.136374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.136384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.136687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.136704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 
00:30:54.842 [2024-12-05 13:35:17.137014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.137022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.137387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.137395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.137723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.137732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.138039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.138047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.138370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.138379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.138676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.138684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.138877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.138886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.139185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.139193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.139342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.139350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.139684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.139692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 
00:30:54.842 [2024-12-05 13:35:17.139866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.139874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.139942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.139949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.140215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.140223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.842 [2024-12-05 13:35:17.140401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.842 [2024-12-05 13:35:17.140410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.842 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.140706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.140715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.141032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.141041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.141333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.141342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.141542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.141550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.141734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.141742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.142064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.142074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 
00:30:54.843 [2024-12-05 13:35:17.142379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.142388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.142691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.142699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.143007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.143016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.143329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.143337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.143644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.143653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.143964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.143972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.144142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.144151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.144330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.144339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.144642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.144651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.144931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.144940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 
00:30:54.843 [2024-12-05 13:35:17.145228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.145236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.145426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.145434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.145792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.145801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.146101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.146109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.146434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.146443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.146712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.146721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.147032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.147041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.147337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.147346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.147686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.147694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.148003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.148012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 
00:30:54.843 [2024-12-05 13:35:17.148340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.148348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.148660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.148668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.148941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.148950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.149287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.149295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.149606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.149614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.149929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.149938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.843 qpair failed and we were unable to recover it. 00:30:54.843 [2024-12-05 13:35:17.150214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.843 [2024-12-05 13:35:17.150223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.150543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.150551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.150853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.150866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.151167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.151175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 
00:30:54.844 [2024-12-05 13:35:17.151329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.151338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.151644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.151654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.151974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.151983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.152299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.152308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.152603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.152612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.152906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.152915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.153197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.153205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.153510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.153519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.153848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.153856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.154047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.154055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 
00:30:54.844 [2024-12-05 13:35:17.154372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.154380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.154676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.154684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.154996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.155005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.155330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.155339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.155637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.155646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.155968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.155977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.156283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.156292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.156579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.156587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.156895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.156904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.157227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.157235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 
00:30:54.844 [2024-12-05 13:35:17.157560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.157569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.157874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.157883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.158166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.158174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.158487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.158496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.158841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.158850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.159159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.159168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.159454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.159462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.159806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.159814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.160115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.160124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.160410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.160419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 
00:30:54.844 [2024-12-05 13:35:17.160725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.160734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.161074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.161083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.161365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.161373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.161678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.844 [2024-12-05 13:35:17.161686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.844 qpair failed and we were unable to recover it. 00:30:54.844 [2024-12-05 13:35:17.161992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.162002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.162198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.162207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.162398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.162407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.162715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.162723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.162933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.162941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.163225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.163233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 
00:30:54.845 [2024-12-05 13:35:17.163541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.163550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.163845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.163854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.164139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.164147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.164451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.164459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.164754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.164762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.164944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.164953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.165238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.165246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.165552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.165560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.165863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.165872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.166063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.166071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 
00:30:54.845 [2024-12-05 13:35:17.166382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.166390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.166696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.166705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.166983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.166992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.167296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.167305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.167615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.167624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.167935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.167945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.168284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.168294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.168413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.168423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.168706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.168715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.169021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.169031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 
00:30:54.845 [2024-12-05 13:35:17.169319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.169328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.169664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.169672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.169981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.169990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.170318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.170326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.170690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.170698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.171016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.171024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.171337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.171345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.171616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.171623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.171938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.171946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 00:30:54.845 [2024-12-05 13:35:17.172133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.845 [2024-12-05 13:35:17.172141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.845 qpair failed and we were unable to recover it. 
00:30:54.845 [2024-12-05 13:35:17.172441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.845 [2024-12-05 13:35:17.172449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:54.845 qpair failed and we were unable to recover it.
00:30:54.845 [... the identical connect()/qpair-failure sequence (errno = 111, tqpair=0x7fd858000b90, addr=10.0.0.2, port=4420) repeats for every retry from 13:35:17.172760 through 13:35:17.236923; duplicate entries elided ...]
00:30:54.851 [2024-12-05 13:35:17.236913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.851 [2024-12-05 13:35:17.236923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:54.851 qpair failed and we were unable to recover it.
00:30:54.851 [2024-12-05 13:35:17.237131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.851 [2024-12-05 13:35:17.237139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.851 qpair failed and we were unable to recover it. 00:30:54.851 [2024-12-05 13:35:17.237443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.851 [2024-12-05 13:35:17.237451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.851 qpair failed and we were unable to recover it. 00:30:54.851 [2024-12-05 13:35:17.237751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.851 [2024-12-05 13:35:17.237760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.851 qpair failed and we were unable to recover it. 00:30:54.851 [2024-12-05 13:35:17.238064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.851 [2024-12-05 13:35:17.238072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.851 qpair failed and we were unable to recover it. 00:30:54.851 [2024-12-05 13:35:17.238399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.851 [2024-12-05 13:35:17.238407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.851 qpair failed and we were unable to recover it. 00:30:54.851 [2024-12-05 13:35:17.238629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.851 [2024-12-05 13:35:17.238636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.851 qpair failed and we were unable to recover it. 00:30:54.851 [2024-12-05 13:35:17.238944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.851 [2024-12-05 13:35:17.238953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.851 qpair failed and we were unable to recover it. 00:30:54.851 [2024-12-05 13:35:17.239281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.851 [2024-12-05 13:35:17.239290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.851 qpair failed and we were unable to recover it. 00:30:54.851 [2024-12-05 13:35:17.239476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.851 [2024-12-05 13:35:17.239485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.851 qpair failed and we were unable to recover it. 00:30:54.851 [2024-12-05 13:35:17.239753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.851 [2024-12-05 13:35:17.239761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.851 qpair failed and we were unable to recover it. 
00:30:54.851 [2024-12-05 13:35:17.240037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.851 [2024-12-05 13:35:17.240046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.851 qpair failed and we were unable to recover it. 00:30:54.851 [2024-12-05 13:35:17.240259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.851 [2024-12-05 13:35:17.240267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.851 qpair failed and we were unable to recover it. 00:30:54.851 [2024-12-05 13:35:17.240555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.851 [2024-12-05 13:35:17.240564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.851 qpair failed and we were unable to recover it. 00:30:54.851 [2024-12-05 13:35:17.240882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.851 [2024-12-05 13:35:17.240891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.851 qpair failed and we were unable to recover it. 00:30:54.851 [2024-12-05 13:35:17.241206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.851 [2024-12-05 13:35:17.241214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.851 qpair failed and we were unable to recover it. 00:30:54.851 [2024-12-05 13:35:17.241514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.851 [2024-12-05 13:35:17.241523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.851 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.241828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.241837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.242151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.242160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.242507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.242515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.242812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.242819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 
00:30:54.852 [2024-12-05 13:35:17.243137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.243146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.243526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.243535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.243836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.243844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.244162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.244171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.244442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.244449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.244621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.244630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.244936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.244945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.245309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.245317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.245625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.245633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.245793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.245801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 
00:30:54.852 [2024-12-05 13:35:17.246105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.246115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.246423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.246431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.246602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.246611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.246925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.246934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.247252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.247260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.247590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.247598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.247901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.247915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.248218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.248227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.248553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.248562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.248726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.248735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 
00:30:54.852 [2024-12-05 13:35:17.248957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.248966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.249284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.249292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.249481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.249489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.249801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.249809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.249928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.249936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.250100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.250109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.250415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.250423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.250724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.250732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.251051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.251060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 00:30:54.852 [2024-12-05 13:35:17.251379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.852 [2024-12-05 13:35:17.251388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.852 qpair failed and we were unable to recover it. 
00:30:54.853 [2024-12-05 13:35:17.251690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.251699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.252002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.252010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.252363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.252372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.252698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.252708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.253022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.253031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.253349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.253358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.253661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.253669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.253994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.254002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.254326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.254335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.254649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.254658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 
00:30:54.853 [2024-12-05 13:35:17.254973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.254982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.255301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.255310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.255614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.255623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.255910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.255919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.256218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.256226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.256439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.256447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.256737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.256745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.257031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.257039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.257360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.257368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.257674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.257683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 
00:30:54.853 [2024-12-05 13:35:17.257854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.257866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.258134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.258143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.258324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.258334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.258526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.258535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.258840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.258848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.259152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.259161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.259468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.259478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.259681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.259689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.259969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.259977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.260291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.260301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 
00:30:54.853 [2024-12-05 13:35:17.260612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.260621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.260914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.260924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.261283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.261292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.261606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.261615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.261903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.261913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.853 [2024-12-05 13:35:17.262241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.853 [2024-12-05 13:35:17.262250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.853 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.262554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.262562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.262761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.262769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.263178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.263188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.263487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.263495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 
00:30:54.854 [2024-12-05 13:35:17.263834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.263842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.264172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.264182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.264356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.264364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.264703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.264711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.265020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.265029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.265354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.265362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.265658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.265666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.265978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.265987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.266293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.266303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.266614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.266623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 
00:30:54.854 [2024-12-05 13:35:17.266896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.266904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.267095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.267103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.267468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.267476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.267780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.267788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.268077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.268085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.268377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.268385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.268660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.268669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.268990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.268999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.269315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.269324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.269637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.269647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 
00:30:54.854 [2024-12-05 13:35:17.269983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.269992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.270311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.270320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.270629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.270637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.270823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.270832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.271148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.271157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.271510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.271519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.271855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.271869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.271971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.271979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.272286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.272296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.272518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.272528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 
00:30:54.854 [2024-12-05 13:35:17.272853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.272868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.273161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.273170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.854 [2024-12-05 13:35:17.273481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.854 [2024-12-05 13:35:17.273489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.854 qpair failed and we were unable to recover it. 00:30:54.855 [2024-12-05 13:35:17.273814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.855 [2024-12-05 13:35:17.273823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.855 qpair failed and we were unable to recover it. 00:30:54.855 [2024-12-05 13:35:17.274111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.855 [2024-12-05 13:35:17.274120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.855 qpair failed and we were unable to recover it. 00:30:54.855 [2024-12-05 13:35:17.274463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.855 [2024-12-05 13:35:17.274471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.855 qpair failed and we were unable to recover it. 00:30:54.855 [2024-12-05 13:35:17.274847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.855 [2024-12-05 13:35:17.274855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.855 qpair failed and we were unable to recover it. 00:30:54.855 [2024-12-05 13:35:17.275155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.855 [2024-12-05 13:35:17.275162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.855 qpair failed and we were unable to recover it. 00:30:54.855 [2024-12-05 13:35:17.275457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.855 [2024-12-05 13:35:17.275466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.855 qpair failed and we were unable to recover it. 00:30:54.855 [2024-12-05 13:35:17.275761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.855 [2024-12-05 13:35:17.275771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.855 qpair failed and we were unable to recover it. 
00:30:54.855 [2024-12-05 13:35:17.276059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.855 [2024-12-05 13:35:17.276067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:54.855 qpair failed and we were unable to recover it.
[... log condensed: the three-line error sequence above repeats roughly 200 more times, with only the microsecond timestamps advancing (13:35:17.276 through 13:35:17.338); every connect() attempt to 10.0.0.2:4420 for tqpair=0x7fd858000b90 fails with errno = 111 and the qpair is never recovered ...]
00:30:54.860 [2024-12-05 13:35:17.338879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.860 [2024-12-05 13:35:17.338887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:54.860 qpair failed and we were unable to recover it.
00:30:54.860 [2024-12-05 13:35:17.339166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.860 [2024-12-05 13:35:17.339174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.860 qpair failed and we were unable to recover it. 00:30:54.860 [2024-12-05 13:35:17.339423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.860 [2024-12-05 13:35:17.339430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.860 qpair failed and we were unable to recover it. 00:30:54.860 [2024-12-05 13:35:17.339755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.860 [2024-12-05 13:35:17.339763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.860 qpair failed and we were unable to recover it. 00:30:54.860 [2024-12-05 13:35:17.340069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.860 [2024-12-05 13:35:17.340080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.860 qpair failed and we were unable to recover it. 00:30:54.860 [2024-12-05 13:35:17.340234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.860 [2024-12-05 13:35:17.340243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.860 qpair failed and we were unable to recover it. 00:30:54.860 [2024-12-05 13:35:17.340538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.860 [2024-12-05 13:35:17.340547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.860 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.340854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.340864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.341170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.341179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.341471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.341479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.341759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.341766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 
00:30:54.861 [2024-12-05 13:35:17.341931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.341940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.342253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.342261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.342439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.342447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.342769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.342777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.343076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.343084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.343388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.343397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.343690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.343700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.344008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.344016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.344321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.344330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.344636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.344644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 
00:30:54.861 [2024-12-05 13:35:17.344950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.344960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.345269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.345277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.345585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.345594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.345884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.345893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.346172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.346180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.346486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.346495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.346789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.346798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.347109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.347118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.347376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.347384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.347675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.347683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 
00:30:54.861 [2024-12-05 13:35:17.347993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.348003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.348318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.348326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.348649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.348658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.348964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.348973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.349281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.349290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.349603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.349611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.349930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.349939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.350250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.350259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.350563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.350572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.861 qpair failed and we were unable to recover it. 00:30:54.861 [2024-12-05 13:35:17.350753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.861 [2024-12-05 13:35:17.350763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 
00:30:54.862 [2024-12-05 13:35:17.351081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.351089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.351390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.351407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.351730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.351739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.352032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.352042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.352341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.352350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.352657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.352666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.352821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.352829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.353017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.353025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.353246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.353255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.353560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.353570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 
00:30:54.862 [2024-12-05 13:35:17.353866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.353874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.354197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.354205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.354513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.354523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.354872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.354881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.355190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.355200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.355528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.355537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.355719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.355727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.356032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.356041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.356403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.356412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.356728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.356738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 
00:30:54.862 [2024-12-05 13:35:17.357037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.357046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.357358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.357367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.357639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.357648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.357921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.357930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.358251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.358260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.358593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.358603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.358789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.358798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.358993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.359001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.359271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.359280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.359599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.359608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 
00:30:54.862 [2024-12-05 13:35:17.359961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.359970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.360306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.360315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.360623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.360631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.360782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.360790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.361069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.361078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.361381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.361389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.361660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.361668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.862 qpair failed and we were unable to recover it. 00:30:54.862 [2024-12-05 13:35:17.361978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.862 [2024-12-05 13:35:17.361986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.362308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.362318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.362621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.362629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 
00:30:54.863 [2024-12-05 13:35:17.362935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.362943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.363257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.363265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.363568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.363586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.363916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.363927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.364237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.364245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.364551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.364559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.364735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.364744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.365030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.365038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.365354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.365362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.365651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.365659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 
00:30:54.863 [2024-12-05 13:35:17.365815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.365823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.366133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.366141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.366324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.366333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.366640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.366648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.366952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.366960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.367267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.367275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.367579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.367588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.367897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.367906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.368212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.368220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.368465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.368473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 
00:30:54.863 [2024-12-05 13:35:17.368781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.368789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.369098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.369106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.369446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.369455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.370282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.370302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.370623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.370633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.370945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.370954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.371156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.371165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.371441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.371450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.371721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.371728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.372035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.372045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 
00:30:54.863 [2024-12-05 13:35:17.372351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.372360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.372668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.372677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.372879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.372888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.373193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.373202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.373511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.373519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.863 qpair failed and we were unable to recover it. 00:30:54.863 [2024-12-05 13:35:17.373825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.863 [2024-12-05 13:35:17.373834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.864 qpair failed and we were unable to recover it. 00:30:54.864 [2024-12-05 13:35:17.374190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.864 [2024-12-05 13:35:17.374198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.864 qpair failed and we were unable to recover it. 00:30:54.864 [2024-12-05 13:35:17.374495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.864 [2024-12-05 13:35:17.374503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.864 qpair failed and we were unable to recover it. 00:30:54.864 [2024-12-05 13:35:17.374815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.864 [2024-12-05 13:35:17.374823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.864 qpair failed and we were unable to recover it. 00:30:54.864 [2024-12-05 13:35:17.375022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.864 [2024-12-05 13:35:17.375030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.864 qpair failed and we were unable to recover it. 
00:30:54.864 [2024-12-05 13:35:17.375349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.864 [2024-12-05 13:35:17.375357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.864 qpair failed and we were unable to recover it. 00:30:54.864 [2024-12-05 13:35:17.375663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.864 [2024-12-05 13:35:17.375672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.864 qpair failed and we were unable to recover it. 00:30:54.864 [2024-12-05 13:35:17.375961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.864 [2024-12-05 13:35:17.375970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.864 qpair failed and we were unable to recover it. 00:30:54.864 [2024-12-05 13:35:17.376281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.864 [2024-12-05 13:35:17.376292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.864 qpair failed and we were unable to recover it. 00:30:54.864 [2024-12-05 13:35:17.376599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.864 [2024-12-05 13:35:17.376606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.864 qpair failed and we were unable to recover it. 00:30:54.864 [2024-12-05 13:35:17.376943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.864 [2024-12-05 13:35:17.376952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.864 qpair failed and we were unable to recover it. 00:30:54.864 [2024-12-05 13:35:17.377272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.864 [2024-12-05 13:35:17.377280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.864 qpair failed and we were unable to recover it. 00:30:54.864 [2024-12-05 13:35:17.377576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.864 [2024-12-05 13:35:17.377585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.864 qpair failed and we were unable to recover it. 00:30:54.864 [2024-12-05 13:35:17.377875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.864 [2024-12-05 13:35:17.377883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.864 qpair failed and we were unable to recover it. 00:30:54.864 [2024-12-05 13:35:17.378193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.864 [2024-12-05 13:35:17.378203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:54.864 qpair failed and we were unable to recover it. 
00:30:54.864 [2024-12-05 13:35:17.378386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:54.864 [2024-12-05 13:35:17.378394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:54.864 qpair failed and we were unable to recover it.
00:30:55.145 [... the three-line sequence above (connect() failed, errno = 111 (ECONNREFUSED); sock connection error; "qpair failed and we were unable to recover it.") repeats roughly 200 more times between 13:35:17.378 and 13:35:17.440, always for tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 ...]
00:30:55.145 [2024-12-05 13:35:17.440874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.145 [2024-12-05 13:35:17.440884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.145 qpair failed and we were unable to recover it.
00:30:55.145 [2024-12-05 13:35:17.441075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.145 [2024-12-05 13:35:17.441083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.145 qpair failed and we were unable to recover it. 00:30:55.145 [2024-12-05 13:35:17.441376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.145 [2024-12-05 13:35:17.441386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.145 qpair failed and we were unable to recover it. 00:30:55.145 [2024-12-05 13:35:17.441723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.145 [2024-12-05 13:35:17.441731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.145 qpair failed and we were unable to recover it. 00:30:55.145 [2024-12-05 13:35:17.441915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.145 [2024-12-05 13:35:17.441924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.145 qpair failed and we were unable to recover it. 00:30:55.145 [2024-12-05 13:35:17.442241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.145 [2024-12-05 13:35:17.442249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.145 qpair failed and we were unable to recover it. 00:30:55.145 [2024-12-05 13:35:17.442554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.145 [2024-12-05 13:35:17.442572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.145 qpair failed and we were unable to recover it. 00:30:55.145 [2024-12-05 13:35:17.442876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.145 [2024-12-05 13:35:17.442885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.145 qpair failed and we were unable to recover it. 00:30:55.145 [2024-12-05 13:35:17.443177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.145 [2024-12-05 13:35:17.443186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.443492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.443500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.443802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.443811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 
00:30:55.146 [2024-12-05 13:35:17.443969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.443979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.444259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.444267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.444624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.444632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.444959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.444968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.445212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.445219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.445526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.445535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.445811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.445819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.446120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.446128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.446341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.446348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.446678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.446687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 
00:30:55.146 [2024-12-05 13:35:17.446861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.446875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.447173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.447181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.447496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.447504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.447777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.447786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.448066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.448074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.448379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.448387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.448699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.448708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.449015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.449023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.449329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.449339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.449660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.449668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 
00:30:55.146 [2024-12-05 13:35:17.449988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.449998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.450307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.450316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.450629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.450638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.450953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.450962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.451210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.451218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.451546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.451554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.451819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.451828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.452143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.452152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.452383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.452391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.452569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.452578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 
00:30:55.146 [2024-12-05 13:35:17.452897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.452905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.453201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.453209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.453529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.453538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.453826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.453835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.454143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.454152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.146 [2024-12-05 13:35:17.454443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.146 [2024-12-05 13:35:17.454452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.146 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.454758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.454767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.455146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.455155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.455451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.455460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.455777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.455787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 
00:30:55.147 [2024-12-05 13:35:17.456111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.456124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.456425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.456434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.456708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.456717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.456916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.456925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.457191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.457199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.457493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.457502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.457780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.457789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.458063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.458071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.458378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.458386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.458683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.458692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 
00:30:55.147 [2024-12-05 13:35:17.459012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.459021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.459186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.459194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.459501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.459508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.459908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.459917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.460114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.460122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.460447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.460456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.460703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.460711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.461018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.461027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.461334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.461343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.461659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.461667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 
00:30:55.147 [2024-12-05 13:35:17.461834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.461842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.462189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.462198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.462504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.462512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.462670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.462679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.463002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.463011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.463337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.463345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.463516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.463524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.463812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.463822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.464148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.464156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.464487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.464496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 
00:30:55.147 [2024-12-05 13:35:17.464804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.464812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.465127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.465137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.465327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.147 [2024-12-05 13:35:17.465335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.147 qpair failed and we were unable to recover it. 00:30:55.147 [2024-12-05 13:35:17.465668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.465677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.465954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.465963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.466295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.466304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.466608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.466616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.466923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.466931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.467224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.467232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.467517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.467525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 
00:30:55.148 [2024-12-05 13:35:17.467824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.467834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.468132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.468141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.468458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.468467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.468771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.468780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.469098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.469107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.469486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.469495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.469793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.469803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.469970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.469980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.470333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.470343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.470647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.470657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 
00:30:55.148 [2024-12-05 13:35:17.470943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.470953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.471270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.471278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.471585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.471593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.471885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.471893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.472219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.472229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.472538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.472546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.472830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.472838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.473004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.473013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.473336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.473345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.473635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.473643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 
00:30:55.148 [2024-12-05 13:35:17.473838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.473846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.474108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.474117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.474325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.474333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.474645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.474654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.474965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.474974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.475274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.475283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.475471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.475480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.475798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.475806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.476106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.476115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.148 [2024-12-05 13:35:17.476421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.476430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 
00:30:55.148 [2024-12-05 13:35:17.476660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.148 [2024-12-05 13:35:17.476669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.148 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.476978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.476987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.477184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.477192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.477464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.477473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.477764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.477772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.478080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.478089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.478391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.478401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.478688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.478698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.479015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.479025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.479231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.479240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 
00:30:55.149 [2024-12-05 13:35:17.479527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.479537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.479871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.479879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.480157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.480165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.480501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.480509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.480815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.480824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.481197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.481206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.481986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.482003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.482358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.482368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.482682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.482691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.483002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.483019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 
00:30:55.149 [2024-12-05 13:35:17.483315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.483325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.483639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.483648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.483982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.483990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.484304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.484313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.484647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.484656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.484971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.484980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.485275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.485283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.485609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.485618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.485974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.485982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.486179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.486187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 
00:30:55.149 [2024-12-05 13:35:17.486408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.486416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.486676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.486684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.487001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.487010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.487334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.487342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.487539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.487547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.487622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.487631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.487823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.487831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.488190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.488199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.149 [2024-12-05 13:35:17.488372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.149 [2024-12-05 13:35:17.488380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.149 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.488703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.488712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 
00:30:55.150 [2024-12-05 13:35:17.489020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.489029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.489378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.489387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.489715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.489725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.489916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.489924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.490271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.490281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.490496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.490504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.490870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.490879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.491202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.491210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.491276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.491284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.491611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.491621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 
00:30:55.150 [2024-12-05 13:35:17.491850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.491865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.492126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.492134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.492449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.492458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.492758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.492766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.493083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.493093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.493409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.493418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.493716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.493725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.494037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.494046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.494378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.494387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.494688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.494696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 
00:30:55.150 [2024-12-05 13:35:17.495000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.495009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.495409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.495418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.495746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.495755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.496044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.496053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.496279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.496289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.496505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.496514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.496673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.496683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.496974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.496983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.497296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.497304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.497594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.497604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 
00:30:55.150 [2024-12-05 13:35:17.497800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.497809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.498161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.150 [2024-12-05 13:35:17.498171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.150 qpair failed and we were unable to recover it. 00:30:55.150 [2024-12-05 13:35:17.498477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.498486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.498692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.498702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.498988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.498997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.499329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.499338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.499650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.499659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.499984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.499993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.500180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.500188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.500498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.500506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 
00:30:55.151 [2024-12-05 13:35:17.500664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.500671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.500948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.500957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.501160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.501169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.501391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.501398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.501723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.501731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.502037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.502046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.502381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.502390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.502706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.502714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.503010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.503018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.503359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.503368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 
00:30:55.151 [2024-12-05 13:35:17.503567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.503577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.503878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.503887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.504114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.504122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.504329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.504337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.504649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.504657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.504978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.504987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.505275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.505284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.505445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.505453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.505755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.505764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.506006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.506014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 
00:30:55.151 [2024-12-05 13:35:17.506328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.506337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.506671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.506678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.506967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.506976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.507292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.507300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.507607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.507616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.507959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.507970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.508283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.508292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.508586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.508594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.508886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.508896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 00:30:55.151 [2024-12-05 13:35:17.509209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.151 [2024-12-05 13:35:17.509218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.151 qpair failed and we were unable to recover it. 
00:30:55.152 [2024-12-05 13:35:17.509521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.509529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.509834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.509843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.510062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.510071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.510246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.510254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.510359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.510365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.510684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.510693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.511013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.511023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.511239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.511247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.511563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.511571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.511927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.511937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 
00:30:55.152 [2024-12-05 13:35:17.512236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.512244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.512466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.512474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.512776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.512784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.513097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.513107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.513415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.513423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.513778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.513786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.513958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.513966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.514274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.514282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.514496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.514504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.514675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.514683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 
00:30:55.152 [2024-12-05 13:35:17.515006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.515017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.515343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.515351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.515655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.515663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.515856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.515879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.516187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.516197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.516498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.516506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.516818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.516828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.516955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.516964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.517301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.517310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.517476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.517486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 
00:30:55.152 [2024-12-05 13:35:17.517818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.517828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.518132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.518141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.518449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.518458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.518633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.518642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.518853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.518865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.519175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.519185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.519502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.519511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.152 [2024-12-05 13:35:17.519802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.152 [2024-12-05 13:35:17.519809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.152 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.520152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.520162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.520357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.520365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 
00:30:55.153 [2024-12-05 13:35:17.520689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.520698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.520895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.520904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.521102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.521110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.521429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.521438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.521526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.521535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.521730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.521739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.522054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.522063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.522462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.522471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.522779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.522795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.523105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.523114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 
00:30:55.153 [2024-12-05 13:35:17.523405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.523413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.523800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.523810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.524131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.524140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.524438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.524446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.524653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.524660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.524981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.524990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.525319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.525327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.525652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.525660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.525970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.525979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.526186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.526195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 
00:30:55.153 [2024-12-05 13:35:17.526495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.526505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.526827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.526836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.527008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.527017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.527349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.527357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.527670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.527680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.527991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.528000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.528197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.528205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.528520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.528529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.528864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.528873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 00:30:55.153 [2024-12-05 13:35:17.529029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.153 [2024-12-05 13:35:17.529038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.153 qpair failed and we were unable to recover it. 
00:30:55.153 [2024-12-05 13:35:17.529362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.153 [2024-12-05 13:35:17.529371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.153 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats verbatim, only the timestamps advancing, from 13:35:17.529663 through 13:35:17.591151 (Jenkins clock 00:30:55.153 to 00:30:55.159) ...]
00:30:55.159 [2024-12-05 13:35:17.591414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.591422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.591720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.591728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.592031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.592039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.592364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.592372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.592716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.592725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.592917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.592926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.593272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.593280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.593593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.593601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.593892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.593900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.594244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.594252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 
00:30:55.159 [2024-12-05 13:35:17.594543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.594551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.594736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.594744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.594931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.594939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.595215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.595225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.595526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.595534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.595726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.595734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.596041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.596049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.596401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.596410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.596749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.596757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.597079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.597087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 
00:30:55.159 [2024-12-05 13:35:17.597411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.597422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.597631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.597639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.597867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.597877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.598213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.598221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.598392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.598401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.598586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.598594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.598793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.598803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.599176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.599184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.599496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.599504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.599813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.599821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 
00:30:55.159 [2024-12-05 13:35:17.600131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.600140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.600510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.600519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.600833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.600842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.601109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.601117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.601416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.601424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.601753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.601763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.602095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.602105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.602533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.602542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.602849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.159 [2024-12-05 13:35:17.602858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.159 qpair failed and we were unable to recover it. 00:30:55.159 [2024-12-05 13:35:17.603166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.603175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 
00:30:55.160 [2024-12-05 13:35:17.603473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.603482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.603814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.603823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.604120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.604129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.604304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.604314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.604606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.604614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.604983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.604992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.605321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.605329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.605510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.605519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.605821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.605829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.606150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.606159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 
00:30:55.160 [2024-12-05 13:35:17.606313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.606321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.606706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.606714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.606992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.607000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.607327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.607336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.607663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.607671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.608022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.608030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.608246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.608254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.608604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.608612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.608848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.608855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.609210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.609219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 
00:30:55.160 [2024-12-05 13:35:17.609517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.609534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.609698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.609708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.609896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.609912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.610207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.610215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.610288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.610296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.610555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.610563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.610765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.610773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.611063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.611071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.611363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.611371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.611645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.611653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 
00:30:55.160 [2024-12-05 13:35:17.611850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.611858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.612182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.612199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.612490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.612499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.612835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.612844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.613167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.613176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.613496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.613506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.613840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.613849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.614159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.614168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.614472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.614481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.614690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.614699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 
00:30:55.160 [2024-12-05 13:35:17.614984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.614992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.615217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.615224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.615497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.615506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.160 [2024-12-05 13:35:17.615810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.160 [2024-12-05 13:35:17.615818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.160 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.616120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.616128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.616445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.616453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.616794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.616803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.617102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.617112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.617305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.617313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.617602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.617610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 
00:30:55.161 [2024-12-05 13:35:17.617933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.617942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.618273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.618281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.618567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.618576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.618755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.618763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.618998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.619007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.619292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.619301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.619609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.619617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.619922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.619931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.620270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.620279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.620590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.620598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 
00:30:55.161 [2024-12-05 13:35:17.620912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.620920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.621228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.621236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.621605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.621613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.621963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.621972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.622208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.622216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.622553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.622562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.622743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.622752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.623065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.623074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.623392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.623402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.623701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.623709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 
00:30:55.161 [2024-12-05 13:35:17.624054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.624063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.624369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.624378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.624654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.624662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.624943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.624950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.625271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.625280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.625592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.625600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.625803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.625811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.626166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.626174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.626459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.626468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.626765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.626773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 
00:30:55.161 [2024-12-05 13:35:17.627108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.627117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.627423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.627430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.627717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.627725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.628042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.628051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.628274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.628282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.628550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.628558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.628867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.161 [2024-12-05 13:35:17.628876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.161 qpair failed and we were unable to recover it. 00:30:55.161 [2024-12-05 13:35:17.629195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.162 [2024-12-05 13:35:17.629205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.162 qpair failed and we were unable to recover it. 00:30:55.162 [2024-12-05 13:35:17.629359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.162 [2024-12-05 13:35:17.629369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.162 qpair failed and we were unable to recover it. 00:30:55.162 [2024-12-05 13:35:17.629631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.162 [2024-12-05 13:35:17.629639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.162 qpair failed and we were unable to recover it. 
00:30:55.162 [2024-12-05 13:35:17.629834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.162 [2024-12-05 13:35:17.629842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.162 qpair failed and we were unable to recover it.
[... the same three-line failure — posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats about 210 times, timestamps 13:35:17.629834 through 13:35:17.690135; only the timestamps vary ...]
00:30:55.166 [2024-12-05 13:35:17.690126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.166 [2024-12-05 13:35:17.690135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.166 qpair failed and we were unable to recover it.
00:30:55.166 [2024-12-05 13:35:17.690203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-05 13:35:17.690210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-05 13:35:17.690404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-05 13:35:17.690412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-05 13:35:17.690614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-05 13:35:17.690623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-05 13:35:17.690802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-05 13:35:17.690810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-05 13:35:17.690942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-05 13:35:17.690949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-05 13:35:17.691270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-05 13:35:17.691279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-05 13:35:17.691590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-05 13:35:17.691599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.166 [2024-12-05 13:35:17.691940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.166 [2024-12-05 13:35:17.691948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.166 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.692072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.692080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.692375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.692384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 
00:30:55.441 [2024-12-05 13:35:17.692592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.692599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.692910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.692919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.693241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.693250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.693545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.693553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.693757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.693764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.694145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.694154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.694492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.694501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.694836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.694845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.695140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.695149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.695439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.695448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 
00:30:55.441 [2024-12-05 13:35:17.695776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.695785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.696097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.696106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.696431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.696440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.696753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.696762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.697060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.697070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.697395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.697405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.697558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.697568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.697849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.697860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.698224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.698234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.698546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.698555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 
00:30:55.441 [2024-12-05 13:35:17.698706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.698715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.699061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.441 [2024-12-05 13:35:17.699070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.441 qpair failed and we were unable to recover it. 00:30:55.441 [2024-12-05 13:35:17.699380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.699389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.699709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.699717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.699972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.699980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.700179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.700190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.700561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.700569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.700899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.700907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.701229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.701237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.701428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.701435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 
00:30:55.442 [2024-12-05 13:35:17.701810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.701818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.702188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.702197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.702505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.702513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.702915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.702923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.703261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.703270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.703575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.703584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.703919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.703928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.704261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.704269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.704571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.704578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.704873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.704882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 
00:30:55.442 [2024-12-05 13:35:17.705175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.705184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.705518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.705526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.705835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.705843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.706153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.706161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.706368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.706376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.706522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.706530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.706839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.706847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.707078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.707086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.707272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.707281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.707588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.707597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 
00:30:55.442 [2024-12-05 13:35:17.707813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.707822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.708134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.708143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.708346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.708353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.708647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.708656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.708994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.709002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.709198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.709206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.709353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.709362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.709654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.709664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.709834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.709843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 00:30:55.442 [2024-12-05 13:35:17.710076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.442 [2024-12-05 13:35:17.710085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.442 qpair failed and we were unable to recover it. 
00:30:55.443 [2024-12-05 13:35:17.710388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.710396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.710729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.710738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.710895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.710904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.711114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.711122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.711443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.711451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.711667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.711675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.711853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.711861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.712077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.712085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.712446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.712455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.712787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.712796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 
00:30:55.443 [2024-12-05 13:35:17.713201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.713211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.713551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.713559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.713859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.713870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.714163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.714171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.714381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.714389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.714736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.714745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.715168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.715178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.715511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.715520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.715825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.715835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.716138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.716148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 
00:30:55.443 [2024-12-05 13:35:17.716461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.716471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.716664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.716674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.717002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.717012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.717355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.717364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.717548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.717559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.717769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.717778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.718114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.718124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.718334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.718344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.718554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.718563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.718876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.718886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 
00:30:55.443 [2024-12-05 13:35:17.719198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.719207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.719536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.719546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.719872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.719882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.720183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.720193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.720464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.720473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.720515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.720523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.720836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.720845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.443 qpair failed and we were unable to recover it. 00:30:55.443 [2024-12-05 13:35:17.721155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.443 [2024-12-05 13:35:17.721167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.721470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.721479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.721650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.721659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 
00:30:55.444 [2024-12-05 13:35:17.722010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.722021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.722297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.722306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.722605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.722615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.722715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.722723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.723004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.723014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.723199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.723208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.723419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.723428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.723734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.723743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.724195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.724205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.724292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.724301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 
00:30:55.444 [2024-12-05 13:35:17.724513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.724523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.724788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.724797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.725107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.725117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.725405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.725414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.725712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.725721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.726038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.726048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.726348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.726358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.726556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.726565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.726842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.726852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 00:30:55.444 [2024-12-05 13:35:17.727205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.444 [2024-12-05 13:35:17.727215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.444 qpair failed and we were unable to recover it. 
00:30:55.444 [2024-12-05 13:35:17.727516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.444 [2024-12-05 13:35:17.727526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.444 qpair failed and we were unable to recover it.
00:30:55.444 [... the same three-message pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats roughly 200 more times between 2024-12-05 13:35:17.727869 and 13:35:17.788026, with only the timestamps advancing. ...]
00:30:55.450 [2024-12-05 13:35:17.788182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.788192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.788511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.788519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.788827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.788836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.789156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.789164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.789350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.789357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.789680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.789689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.790050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.790060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.790408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.790418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.790700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.790708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.791018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.791026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 
00:30:55.450 [2024-12-05 13:35:17.791330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.791337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.791532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.791541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.791847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.791856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.792143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.792151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.792458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.792466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.792791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.792800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.793115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.793124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.793464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.793473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.793809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.793818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.794152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.794161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 
00:30:55.450 [2024-12-05 13:35:17.794452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.794461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.794753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.794763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.450 qpair failed and we were unable to recover it. 00:30:55.450 [2024-12-05 13:35:17.795045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.450 [2024-12-05 13:35:17.795055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.795372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.795381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.795670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.795680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.795987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.795996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.796316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.796325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.796485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.796494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.796805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.796813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.796908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.796917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 
00:30:55.451 [2024-12-05 13:35:17.797305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.797313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.797656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.797665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.797970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.797978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.798304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.798312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.798620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.798629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.798924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.798934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.798986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.798995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.799173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.799182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.799512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.799521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.799810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.799819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 
00:30:55.451 [2024-12-05 13:35:17.800104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.800112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.800363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.800371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.800568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.800576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.800896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.800905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.801225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.801235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.801393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.801401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.801718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.801726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.802031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.802040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.802348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.802357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.802513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.802522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 
00:30:55.451 [2024-12-05 13:35:17.802822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.802830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.803185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.803194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.803495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.803503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.803812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.803821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.804166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.804175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.804485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.804494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.804801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.804810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.805107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.805116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.805419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.805428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 00:30:55.451 [2024-12-05 13:35:17.805735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.805744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.451 qpair failed and we were unable to recover it. 
00:30:55.451 [2024-12-05 13:35:17.805990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.451 [2024-12-05 13:35:17.805999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.806308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.806317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.806488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.806498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.806779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.806788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.807120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.807130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.807313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.807322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.807483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.807492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.807819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.807827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.808158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.808167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.808459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.808468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 
00:30:55.452 [2024-12-05 13:35:17.808771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.808780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.809090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.809099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.809430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.809438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.809742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.809751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.810053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.810063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.810369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.810378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.810681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.810691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.811024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.811033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.811358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.811367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.811559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.811568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 
00:30:55.452 [2024-12-05 13:35:17.811837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.811845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.812172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.812181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.812451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.812459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.812641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.812649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.812957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.812965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.813282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.813289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.813598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.813606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.813907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.813916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.814240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.814248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.814552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.814560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 
00:30:55.452 [2024-12-05 13:35:17.814906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.814915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.815219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.815227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.815498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.815506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.815825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.815833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.816189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.816197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.816533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.816542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.816866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.816875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.817181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.817190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.452 qpair failed and we were unable to recover it. 00:30:55.452 [2024-12-05 13:35:17.817480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.452 [2024-12-05 13:35:17.817488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.817632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.817641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 
00:30:55.453 [2024-12-05 13:35:17.817886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.817894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.818196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.818206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.818358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.818367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.818584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.818593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.818893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.818903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.818966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.818974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.819284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.819293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.819631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.819640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.819966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.819975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.820321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.820330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 
00:30:55.453 [2024-12-05 13:35:17.820649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.820657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.820958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.820966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.821260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.821268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.821477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.821485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.821671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.821680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.822004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.822013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.822333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.822342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.822669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.822677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.822871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.822880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.823171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.823180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 
00:30:55.453 [2024-12-05 13:35:17.823478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.823486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.823824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.823832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.824120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.824129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.824318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.824326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.824608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.824617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.824933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.824941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.825214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.825222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.825549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.825557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.825842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.825850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 00:30:55.453 [2024-12-05 13:35:17.825972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.453 [2024-12-05 13:35:17.825979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.453 qpair failed and we were unable to recover it. 
00:30:55.453 [2024-12-05 13:35:17.826275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.453 [2024-12-05 13:35:17.826284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.453 qpair failed and we were unable to recover it.
00:30:55.453 [2024-12-05 13:35:17.826607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.453 [2024-12-05 13:35:17.826617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.453 qpair failed and we were unable to recover it.
00:30:55.453 [2024-12-05 13:35:17.826955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.453 [2024-12-05 13:35:17.826963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.453 qpair failed and we were unable to recover it.
00:30:55.453 [2024-12-05 13:35:17.827297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.453 [2024-12-05 13:35:17.827306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.453 qpair failed and we were unable to recover it.
00:30:55.453 [2024-12-05 13:35:17.827658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.453 [2024-12-05 13:35:17.827667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.453 qpair failed and we were unable to recover it.
00:30:55.453 [2024-12-05 13:35:17.827903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.453 [2024-12-05 13:35:17.827910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.453 qpair failed and we were unable to recover it.
00:30:55.453 [2024-12-05 13:35:17.828211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.453 [2024-12-05 13:35:17.828220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.828538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.828546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.828888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.828896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.829244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.829252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.829562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.829570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.829878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.829887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.830206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.830215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.830543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.830551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.830873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.830882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.831168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.831176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.831488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.831496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.831789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.831797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.832103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.832111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.832433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.832441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.832597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.832605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.832871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.832880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.833120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.833129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.833485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.833492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.833781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.833790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.834038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.834046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.834336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.834344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.834539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.834547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.834868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.834878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.834994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.835001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.835330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.835339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.835672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.835681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.835994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.836002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.836180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.836188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.836370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.836378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.836651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.836660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.836877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.836885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.837103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.837112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.837447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.837455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.837785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.837793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.837966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.837975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.838387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.838396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.838560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.838568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.838879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.454 [2024-12-05 13:35:17.838888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.454 qpair failed and we were unable to recover it.
00:30:55.454 [2024-12-05 13:35:17.839201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.839210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.839512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.839521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.839821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.839829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.840129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.840138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.840452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.840459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.840767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.840776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.841087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.841095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.841386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.841394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.841704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.841712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.842027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.842036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.842240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.842248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.842524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.842533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.842696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.842704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.842993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.843002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.843323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.843331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.843641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.843650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.843943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.843952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.844319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.844327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.844492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.844500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.844845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.844853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.845166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.845177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.845485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.845494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.845797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.845805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.846195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.846204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.846494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.846502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.846846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.846853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.847169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.847178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.847471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.847480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.847692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.847699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.847980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.847989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.848203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.848211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.848373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.848381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.848561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.848569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.848775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.848783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.849097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.455 [2024-12-05 13:35:17.849105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.455 qpair failed and we were unable to recover it.
00:30:55.455 [2024-12-05 13:35:17.849436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.849444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.849631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.849639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.849737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.849745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.850174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.850182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.850481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.850489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.850804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.850812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.851029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.851038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.851368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.851376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.851559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.851566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.851754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.851763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.852055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.852064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.852491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.852500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.852692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.852701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.853025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.853034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.853334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.853342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.853504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.853513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.853773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.853781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.854099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.854107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.854496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.854504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.854796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.854804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.854988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.854997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.855196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.855204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.855496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.855504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.855817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.855825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.856158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.856167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.856489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.856499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.856851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.856859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.857083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.857091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.857385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.857394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.857699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.857708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.857917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.857925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.858119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.858127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.858451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.858459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.858668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.858675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.858873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.858881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.859166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.859175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.859483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.859491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.859802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.456 [2024-12-05 13:35:17.859810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.456 qpair failed and we were unable to recover it.
00:30:55.456 [2024-12-05 13:35:17.860119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.860128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.860415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.860423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.860641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.860649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.860925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.860934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.861157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.861164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.861362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.861379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.861720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.861729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.862043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.862051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.862356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.862364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.862638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.862646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.862888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.862896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.863211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.863219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.863577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.863584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.863900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.863908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.863992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.864000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.864294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.864302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.864592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.864600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.864893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.864901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.865290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.865298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.865635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.865642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.865938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.865946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.866016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.866023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.866140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.866147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.866316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.866324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.866551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.866559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.866742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.866750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.867123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.867131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.867309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.867319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.867584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.867593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.867779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.867788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.868140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.868148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.868469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.868477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.868815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.868822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.869131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.869140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.869291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.869298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.869475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.869483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.869793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.869801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.870111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.457 [2024-12-05 13:35:17.870119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.457 qpair failed and we were unable to recover it.
00:30:55.457 [2024-12-05 13:35:17.870429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.870437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.870798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.870806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.871004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.871013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.871343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.871352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.871545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.871552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.871876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.871885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.872197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.872205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.872530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.872537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.872844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.872851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.873177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.873185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.873488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.873496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.873794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.873801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.874122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.874130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.874482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.874490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.874805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.874813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.875135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.875143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.875538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.875545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.875857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.875871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.876048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.876056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.876362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.876369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.876698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.876707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.876908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.876916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.877222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.877230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.877550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.877558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.877868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.877876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.878091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.878099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.878399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.878407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.878703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.878711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.878932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.878941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.879259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.879268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.879579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.879587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.879877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.879886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.880116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.880124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.880451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.880459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.880751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.880759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.881053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.881062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.881392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.881400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.881703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.458 [2024-12-05 13:35:17.881712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.458 qpair failed and we were unable to recover it.
00:30:55.458 [2024-12-05 13:35:17.882017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.882024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.882372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.882380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.882685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.882693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.882959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.882968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.883296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.883304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.883625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.883632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.883893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.883901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.884236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.884244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.884434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.884442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.884721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.884729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.884938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.884947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.885259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.885267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.885607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.885615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.885895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.885903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.886103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.886111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.886321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.886328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.886701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.459 [2024-12-05 13:35:17.886709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.459 qpair failed and we were unable to recover it.
00:30:55.459 [2024-12-05 13:35:17.887002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.887011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 00:30:55.459 [2024-12-05 13:35:17.887340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.887348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 00:30:55.459 [2024-12-05 13:35:17.887649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.887657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 00:30:55.459 [2024-12-05 13:35:17.887848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.887857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 00:30:55.459 [2024-12-05 13:35:17.888205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.888213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 00:30:55.459 [2024-12-05 13:35:17.888411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.888419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 00:30:55.459 [2024-12-05 13:35:17.888726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.888734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 00:30:55.459 [2024-12-05 13:35:17.888977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.888985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 00:30:55.459 [2024-12-05 13:35:17.889083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.889089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 00:30:55.459 [2024-12-05 13:35:17.889392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.889401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 
00:30:55.459 [2024-12-05 13:35:17.889711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.889720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 00:30:55.459 [2024-12-05 13:35:17.890052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.890060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 00:30:55.459 [2024-12-05 13:35:17.890366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.890375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 00:30:55.459 [2024-12-05 13:35:17.890700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.890708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 00:30:55.459 [2024-12-05 13:35:17.891102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.891112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 00:30:55.459 [2024-12-05 13:35:17.891459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.891466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 00:30:55.459 [2024-12-05 13:35:17.891645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.459 [2024-12-05 13:35:17.891653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.459 qpair failed and we were unable to recover it. 00:30:55.459 [2024-12-05 13:35:17.891937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.891945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.892265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.892273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.892585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.892593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 
00:30:55.460 [2024-12-05 13:35:17.892900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.892909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.893229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.893237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.893555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.893563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.893875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.893883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.894172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.894179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.894387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.894396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.894709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.894717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.895054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.895062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.895387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.895394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.895674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.895682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 
00:30:55.460 [2024-12-05 13:35:17.895895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.895903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.896259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.896268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.896577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.896586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.896892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.896900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.897119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.897127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.897456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.897463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.897773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.897781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.897936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.897945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.898216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.898224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.898529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.898538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 
00:30:55.460 [2024-12-05 13:35:17.898875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.898883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.899220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.899228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.899545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.899553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.899836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.899844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.900152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.900160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.900385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.900393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.900704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.900711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.901008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.901017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.901355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.901364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.901672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.901680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 
00:30:55.460 [2024-12-05 13:35:17.901989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.901997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.902210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.902218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.902515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.902523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.902770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.902777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.903105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.460 [2024-12-05 13:35:17.903113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.460 qpair failed and we were unable to recover it. 00:30:55.460 [2024-12-05 13:35:17.903422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.903430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.903612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.903621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.903979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.903987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.904321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.904328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.904514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.904522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 
00:30:55.461 [2024-12-05 13:35:17.904829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.904837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.905111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.905119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.905421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.905429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.905735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.905743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.906041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.906049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.906328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.906336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.906665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.906673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.906948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.906956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.907271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.907280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.907590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.907598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 
00:30:55.461 [2024-12-05 13:35:17.907795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.907803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.908099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.908108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.908444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.908452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.908736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.908745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.909061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.909069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.909367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.909375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.909684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.909691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.909891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.909899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.910090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.910098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.910255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.910263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 
00:30:55.461 [2024-12-05 13:35:17.910579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.910587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.910892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.910902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.911215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.911223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.911490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.911497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.911831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.911839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.912066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.912075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.912338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.912346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.912460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.912468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.912683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.912690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.912909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.912917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 
00:30:55.461 [2024-12-05 13:35:17.913260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.913268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.913570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.913577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.913919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.461 [2024-12-05 13:35:17.913928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.461 qpair failed and we were unable to recover it. 00:30:55.461 [2024-12-05 13:35:17.914187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.914194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.914510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.914519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.914698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.914706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.914950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.914958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.915218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.915226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.915375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.915383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.915584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.915592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 
00:30:55.462 [2024-12-05 13:35:17.915876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.915884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.916176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.916184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.916468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.916476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.916758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.916765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.917116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.917124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.917412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.917421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.917685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.917693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.917971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.917979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.918165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.918174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.918490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.918498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 
00:30:55.462 [2024-12-05 13:35:17.918812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.918819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.919127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.919136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.919439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.919448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.919825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.919834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.920183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.920191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.920500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.920508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.920809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.920818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.921129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.921138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.921441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.921448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.921636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.921644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 
00:30:55.462 [2024-12-05 13:35:17.921911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.921919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.922288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.922298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.922603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.922611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.922980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.922988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.923293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.923301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.923617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.923625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.923783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.923790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.923971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.923981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.924315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.924323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 00:30:55.462 [2024-12-05 13:35:17.924516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.462 [2024-12-05 13:35:17.924524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.462 qpair failed and we were unable to recover it. 
00:30:55.462 [2024-12-05 13:35:17.924720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.462 [2024-12-05 13:35:17.924729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.462 qpair failed and we were unable to recover it.
00:30:55.462 [2024-12-05 13:35:17.924897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.462 [2024-12-05 13:35:17.924905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.463 qpair failed and we were unable to recover it.
[... the same three-line failure repeats unchanged, with only the microsecond timestamps advancing, through [2024-12-05 13:35:17.986293]: every reconnect attempt for tqpair=0x7fd858000b90 against 10.0.0.2, port=4420 fails with connect() errno = 111 and the qpair cannot be recovered; the duplicate records are elided here ...]
00:30:55.468 [2024-12-05 13:35:17.986581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.468 [2024-12-05 13:35:17.986589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.468 qpair failed and we were unable to recover it. 00:30:55.468 [2024-12-05 13:35:17.986890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.468 [2024-12-05 13:35:17.986899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.468 qpair failed and we were unable to recover it. 00:30:55.468 [2024-12-05 13:35:17.987226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.468 [2024-12-05 13:35:17.987234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.468 qpair failed and we were unable to recover it. 00:30:55.468 [2024-12-05 13:35:17.987558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.468 [2024-12-05 13:35:17.987564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.468 qpair failed and we were unable to recover it. 00:30:55.468 [2024-12-05 13:35:17.987887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.468 [2024-12-05 13:35:17.987894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.468 qpair failed and we were unable to recover it. 00:30:55.468 [2024-12-05 13:35:17.988227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.468 [2024-12-05 13:35:17.988234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.468 qpair failed and we were unable to recover it. 00:30:55.468 [2024-12-05 13:35:17.988567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.468 [2024-12-05 13:35:17.988575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.468 qpair failed and we were unable to recover it. 00:30:55.468 [2024-12-05 13:35:17.988891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.468 [2024-12-05 13:35:17.988899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.468 qpair failed and we were unable to recover it. 00:30:55.468 [2024-12-05 13:35:17.989026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.468 [2024-12-05 13:35:17.989033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.468 qpair failed and we were unable to recover it. 00:30:55.468 [2024-12-05 13:35:17.989347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.468 [2024-12-05 13:35:17.989355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.468 qpair failed and we were unable to recover it. 
00:30:55.468 [2024-12-05 13:35:17.989549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.468 [2024-12-05 13:35:17.989556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.468 qpair failed and we were unable to recover it. 00:30:55.468 [2024-12-05 13:35:17.989855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.468 [2024-12-05 13:35:17.989866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.468 qpair failed and we were unable to recover it. 00:30:55.469 [2024-12-05 13:35:17.990154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.469 [2024-12-05 13:35:17.990161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.469 qpair failed and we were unable to recover it. 00:30:55.469 [2024-12-05 13:35:17.990448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.469 [2024-12-05 13:35:17.990455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.469 qpair failed and we were unable to recover it. 00:30:55.469 [2024-12-05 13:35:17.990843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.469 [2024-12-05 13:35:17.990851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.469 qpair failed and we were unable to recover it. 00:30:55.469 [2024-12-05 13:35:17.991163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.469 [2024-12-05 13:35:17.991171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.469 qpair failed and we were unable to recover it. 00:30:55.469 [2024-12-05 13:35:17.991460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.469 [2024-12-05 13:35:17.991468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.469 qpair failed and we were unable to recover it. 00:30:55.469 [2024-12-05 13:35:17.991785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.469 [2024-12-05 13:35:17.991792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.469 qpair failed and we were unable to recover it. 00:30:55.744 [2024-12-05 13:35:17.992098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.744 [2024-12-05 13:35:17.992108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.744 qpair failed and we were unable to recover it. 00:30:55.744 [2024-12-05 13:35:17.992299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.744 [2024-12-05 13:35:17.992307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.744 qpair failed and we were unable to recover it. 
00:30:55.744 [2024-12-05 13:35:17.992619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.744 [2024-12-05 13:35:17.992627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.744 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.992930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.992939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.993123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.993130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.993448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.993455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.993642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.993651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.993971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.993980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.994274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.994282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.994507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.994515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.994817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.994825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.995110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.995117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 
00:30:55.745 [2024-12-05 13:35:17.995438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.995446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.995597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.995606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.995884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.995892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.996197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.996204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.996517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.996525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.996833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.996842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.997120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.997127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.997506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.997513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.997796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.997804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.998105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.998112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 
00:30:55.745 [2024-12-05 13:35:17.998420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.998427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.998719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.998726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.999033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.999040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.999370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.999377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.999453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.999461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:17.999733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:17.999741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:18.000050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:18.000058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:18.000391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:18.000399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:18.000692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:18.000699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:18.001025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:18.001033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 
00:30:55.745 [2024-12-05 13:35:18.001339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:18.001346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:18.001664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:18.001671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:18.001990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:18.001997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:18.002301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:18.002308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:18.002600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:18.002608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:18.002787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:18.002795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:18.003108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:18.003116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:18.003424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.745 [2024-12-05 13:35:18.003431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.745 qpair failed and we were unable to recover it. 00:30:55.745 [2024-12-05 13:35:18.003714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.003721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.004029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.004036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 
00:30:55.746 [2024-12-05 13:35:18.004355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.004361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.004555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.004568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.004898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.004905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.005218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.005225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.005514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.005521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.005716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.005731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.005899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.005908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.006226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.006233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.006533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.006540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.006837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.006845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 
00:30:55.746 [2024-12-05 13:35:18.007185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.007193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.007497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.007505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.007808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.007816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.008138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.008146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.008309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.008317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.008607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.008615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.008922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.008930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.009240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.009247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.009592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.009599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.009919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.009926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 
00:30:55.746 [2024-12-05 13:35:18.010279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.010286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.010609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.010615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.010914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.010922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.011246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.011252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.011292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.011299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.011561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.011570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.011870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.011878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.012168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.012175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.012390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.012398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.012743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.012749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 
00:30:55.746 [2024-12-05 13:35:18.013049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.013056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.013371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.013378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.013695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.013701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.014038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.014046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.014377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.014385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.014717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.746 [2024-12-05 13:35:18.014724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.746 qpair failed and we were unable to recover it. 00:30:55.746 [2024-12-05 13:35:18.015034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.015041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.015325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.015333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.015605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.015613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.015911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.015918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 
00:30:55.747 [2024-12-05 13:35:18.016246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.016253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.016568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.016577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.016876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.016883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.017077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.017084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.017370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.017377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.017699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.017706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.017918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.017925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.018225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.018233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.018432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.018447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.018771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.018779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 
00:30:55.747 [2024-12-05 13:35:18.019092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.019100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.019395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.019401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.019710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.019717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.019999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.020006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.020326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.020335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.020645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.020652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.020809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.020817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.020980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.020988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.021264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.021271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.021589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.021596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 
00:30:55.747 [2024-12-05 13:35:18.021916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.021924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.022044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.022050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.022274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.022282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.022603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.022610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.022918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.022925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.023200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.023207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.023536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.023542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.023859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.023873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.024184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.024191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.024529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.024536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 
00:30:55.747 [2024-12-05 13:35:18.024893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.024901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.025311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.025319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.025619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.025626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.025934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.747 [2024-12-05 13:35:18.025947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.747 qpair failed and we were unable to recover it. 00:30:55.747 [2024-12-05 13:35:18.026262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.748 [2024-12-05 13:35:18.026269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.748 qpair failed and we were unable to recover it. 00:30:55.748 [2024-12-05 13:35:18.026457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.748 [2024-12-05 13:35:18.026469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.748 qpair failed and we were unable to recover it. 00:30:55.748 [2024-12-05 13:35:18.026787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.748 [2024-12-05 13:35:18.026794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.748 qpair failed and we were unable to recover it. 00:30:55.748 [2024-12-05 13:35:18.027010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.748 [2024-12-05 13:35:18.027017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.748 qpair failed and we were unable to recover it. 00:30:55.748 [2024-12-05 13:35:18.027317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.748 [2024-12-05 13:35:18.027325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.748 qpair failed and we were unable to recover it. 00:30:55.748 [2024-12-05 13:35:18.027610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.748 [2024-12-05 13:35:18.027617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.748 qpair failed and we were unable to recover it. 
00:30:55.753 [2024-12-05 13:35:18.088472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.753 [2024-12-05 13:35:18.088480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.753 qpair failed and we were unable to recover it. 00:30:55.753 [2024-12-05 13:35:18.088793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.753 [2024-12-05 13:35:18.088800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.753 qpair failed and we were unable to recover it. 00:30:55.753 [2024-12-05 13:35:18.089125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.753 [2024-12-05 13:35:18.089133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.753 qpair failed and we were unable to recover it. 00:30:55.753 [2024-12-05 13:35:18.089440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.753 [2024-12-05 13:35:18.089447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.753 qpair failed and we were unable to recover it. 00:30:55.753 [2024-12-05 13:35:18.089841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.753 [2024-12-05 13:35:18.089849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.753 qpair failed and we were unable to recover it. 00:30:55.753 [2024-12-05 13:35:18.090142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.753 [2024-12-05 13:35:18.090149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.753 qpair failed and we were unable to recover it. 00:30:55.753 [2024-12-05 13:35:18.090351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.753 [2024-12-05 13:35:18.090357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.753 qpair failed and we were unable to recover it. 00:30:55.753 [2024-12-05 13:35:18.090686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.753 [2024-12-05 13:35:18.090693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.753 qpair failed and we were unable to recover it. 00:30:55.753 [2024-12-05 13:35:18.090992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.753 [2024-12-05 13:35:18.090999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.753 qpair failed and we were unable to recover it. 00:30:55.753 [2024-12-05 13:35:18.091318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.753 [2024-12-05 13:35:18.091325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.753 qpair failed and we were unable to recover it. 
00:30:55.753 [2024-12-05 13:35:18.091638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.753 [2024-12-05 13:35:18.091646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.753 qpair failed and we were unable to recover it. 00:30:55.753 [2024-12-05 13:35:18.091976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.753 [2024-12-05 13:35:18.091983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.753 qpair failed and we were unable to recover it. 00:30:55.753 [2024-12-05 13:35:18.092302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.753 [2024-12-05 13:35:18.092309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.753 qpair failed and we were unable to recover it. 00:30:55.753 [2024-12-05 13:35:18.092578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.753 [2024-12-05 13:35:18.092585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.753 qpair failed and we were unable to recover it. 00:30:55.753 [2024-12-05 13:35:18.092888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.753 [2024-12-05 13:35:18.092895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.753 qpair failed and we were unable to recover it. 00:30:55.753 [2024-12-05 13:35:18.093299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.753 [2024-12-05 13:35:18.093307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.093595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.093602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.093750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.093758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.093925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.093932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.094200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.094207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 
00:30:55.754 [2024-12-05 13:35:18.094527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.094534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.094855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.094865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.095036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.095044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.095325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.095331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.095619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.095626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.095931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.095938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.096143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.096150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.096481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.096488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.096783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.096792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.097103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.097110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 
00:30:55.754 [2024-12-05 13:35:18.097431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.097437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.097834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.097841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.098128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.098136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.098442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.098450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.098756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.098764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.099071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.099078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.099370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.099377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.099660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.099667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.099982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.099989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.100292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.100299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 
00:30:55.754 [2024-12-05 13:35:18.100591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.100598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.100908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.100916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.101226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.101233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.101391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.101399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.101701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.101708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.102021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.102029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.102332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.102339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.102647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.102653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.102972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.102979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.754 [2024-12-05 13:35:18.103271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.103278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 
00:30:55.754 [2024-12-05 13:35:18.103589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.754 [2024-12-05 13:35:18.103595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.754 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.103790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.103797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.104115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.104123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.104396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.104402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.104716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.104723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.105026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.105035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.105358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.105364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.105650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.105657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.105970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.105977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.106265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.106271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 
00:30:55.755 [2024-12-05 13:35:18.106474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.106481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.106805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.106812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.107131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.107139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.107424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.107431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.107732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.107739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.108034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.108042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.108352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.108358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.108648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.108655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.108933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.108942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.109326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.109333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 
00:30:55.755 [2024-12-05 13:35:18.109523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.109530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.109908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.109915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.110241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.110247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.110555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.110561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.110841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.110849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.111118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.111125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.111413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.111421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.111728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.111735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.112014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.112021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.112343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.112349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 
00:30:55.755 [2024-12-05 13:35:18.112650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.112656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.112891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.112899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.113234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.113241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.113551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.113557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.113853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.113860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.114151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.114157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.114479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.114487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.114784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.114792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.114995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.755 [2024-12-05 13:35:18.115002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.755 qpair failed and we were unable to recover it. 00:30:55.755 [2024-12-05 13:35:18.115323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.115330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 
00:30:55.756 [2024-12-05 13:35:18.115630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.115637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.115813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.115821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.116123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.116131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.116438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.116445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.116763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.116769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.117091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.117099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.117389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.117396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.117657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.117664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.117819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.117827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.118108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.118115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 
00:30:55.756 [2024-12-05 13:35:18.118442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.118449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.118623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.118631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.118906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.118913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.119213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.119219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.119499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.119506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.119784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.119792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.120069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.120076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.120387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.120394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.120793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.120801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.121143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.121150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 
00:30:55.756 [2024-12-05 13:35:18.121458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.121466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.121847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.121854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.122152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.122159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.122463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.122470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.122681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.122688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.122775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.122782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.123070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.123077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.123371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.123378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.123677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.123683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.123990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.123998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 
00:30:55.756 [2024-12-05 13:35:18.124318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.124325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.124484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.124491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.124810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.124817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.125154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.125161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.125555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.125562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.125720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.125729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.756 [2024-12-05 13:35:18.126025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.756 [2024-12-05 13:35:18.126033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.756 qpair failed and we were unable to recover it. 00:30:55.757 [2024-12-05 13:35:18.126408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.757 [2024-12-05 13:35:18.126415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.757 qpair failed and we were unable to recover it. 00:30:55.757 [2024-12-05 13:35:18.126726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.757 [2024-12-05 13:35:18.126733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.757 qpair failed and we were unable to recover it. 00:30:55.757 [2024-12-05 13:35:18.127041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.757 [2024-12-05 13:35:18.127049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.757 qpair failed and we were unable to recover it. 
00:30:55.757 [2024-12-05 13:35:18.127383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.757 [2024-12-05 13:35:18.127390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.757 qpair failed and we were unable to recover it.
00:30:55.757 [2024-12-05 13:35:18.127564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.757 [2024-12-05 13:35:18.127572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.757 qpair failed and we were unable to recover it.
00:30:55.757 [... the same three-line sequence -- posix_sock_create connect() failure (errno = 111), nvme_tcp_qpair_connect_sock error for tqpair=0x7fd858000b90 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." -- repeats continuously from 13:35:18.127 through 13:35:18.189 ...]
00:30:55.762 [2024-12-05 13:35:18.189901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.762 [2024-12-05 13:35:18.189909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:55.762 qpair failed and we were unable to recover it.
00:30:55.762 [2024-12-05 13:35:18.190071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.762 [2024-12-05 13:35:18.190079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.762 qpair failed and we were unable to recover it. 00:30:55.762 [2024-12-05 13:35:18.190353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.762 [2024-12-05 13:35:18.190361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.762 qpair failed and we were unable to recover it. 00:30:55.762 [2024-12-05 13:35:18.190669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.762 [2024-12-05 13:35:18.190677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.762 qpair failed and we were unable to recover it. 00:30:55.762 [2024-12-05 13:35:18.191014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.762 [2024-12-05 13:35:18.191022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.762 qpair failed and we were unable to recover it. 00:30:55.762 [2024-12-05 13:35:18.191339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.762 [2024-12-05 13:35:18.191347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.762 qpair failed and we were unable to recover it. 00:30:55.762 [2024-12-05 13:35:18.191651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.762 [2024-12-05 13:35:18.191659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.762 qpair failed and we were unable to recover it. 00:30:55.762 [2024-12-05 13:35:18.191967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.762 [2024-12-05 13:35:18.191975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.762 qpair failed and we were unable to recover it. 00:30:55.762 [2024-12-05 13:35:18.192265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.762 [2024-12-05 13:35:18.192273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.762 qpair failed and we were unable to recover it. 00:30:55.762 [2024-12-05 13:35:18.192584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.762 [2024-12-05 13:35:18.192593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.762 qpair failed and we were unable to recover it. 00:30:55.762 [2024-12-05 13:35:18.192898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.762 [2024-12-05 13:35:18.192906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.762 qpair failed and we were unable to recover it. 
00:30:55.762 [2024-12-05 13:35:18.193214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.762 [2024-12-05 13:35:18.193222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.762 qpair failed and we were unable to recover it. 00:30:55.762 [2024-12-05 13:35:18.193510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.193519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.193823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.193831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.194039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.194048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.194323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.194331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.194655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.194663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.194822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.194830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.195020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.195028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.195196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.195204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.195359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.195367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 
00:30:55.763 [2024-12-05 13:35:18.195676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.195683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.196020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.196029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.196355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.196362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.196533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.196541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.196813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.196821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.197134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.197143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.197450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.197458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.197802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.197809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.198112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.198120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.198426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.198434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 
00:30:55.763 [2024-12-05 13:35:18.198599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.198607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.198896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.198904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.199218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.199226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.199385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.199392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.199657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.199665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.199958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.199966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.200277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.200285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.200591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.200598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.200905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.200914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.201209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.201217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 
00:30:55.763 [2024-12-05 13:35:18.201482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.201489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.201791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.201800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.202112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.202121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.202472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.202481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.202778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.763 [2024-12-05 13:35:18.202786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.763 qpair failed and we were unable to recover it. 00:30:55.763 [2024-12-05 13:35:18.203126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.203134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.203444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.203451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.203625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.203633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.203953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.203961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.204267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.204275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 
00:30:55.764 [2024-12-05 13:35:18.204581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.204588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.204879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.204887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.205207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.205215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.205527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.205536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.205846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.205854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.206234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.206243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.206545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.206553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.206871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.206880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.207195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.207203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.207498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.207507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 
00:30:55.764 [2024-12-05 13:35:18.207817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.207826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.208128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.208136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.208444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.208452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.208745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.208753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.209037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.209045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.209362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.209370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.209675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.209683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.209992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.210000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.210319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.210327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.210680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.210689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 
00:30:55.764 [2024-12-05 13:35:18.210984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.210993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.211344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.211353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.211650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.211658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.211855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.211866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.212168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.212176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.212467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.212476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.212782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.212790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.213095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.213103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.213404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.213412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.213702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.213710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 
00:30:55.764 [2024-12-05 13:35:18.214016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.214024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.214349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.214358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.214673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.764 [2024-12-05 13:35:18.214682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.764 qpair failed and we were unable to recover it. 00:30:55.764 [2024-12-05 13:35:18.215002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.215011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.215320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.215328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.215648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.215655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.215965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.215973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.216269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.216277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.216467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.216476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.216808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.216816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 
00:30:55.765 [2024-12-05 13:35:18.217130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.217139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.217437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.217445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.217756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.217764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.218075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.218083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.218390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.218398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.218686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.218695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.219004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.219013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.219318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.219327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.219624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.219631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.219931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.219939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 
00:30:55.765 [2024-12-05 13:35:18.220153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.220161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.220429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.220437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.220737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.220745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.220898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.220905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.221108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.221116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.221399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.221407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.221693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.221701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.222014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.222023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.222358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.222366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.222681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.222689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 
00:30:55.765 [2024-12-05 13:35:18.222998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.223007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.223335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.223342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.223657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.223665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.223866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.223874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.224163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.224171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.224496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.224505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.224813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.224821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.225148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.225156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.225467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.225475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.225765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.225772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 
00:30:55.765 [2024-12-05 13:35:18.225924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.225932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.765 qpair failed and we were unable to recover it. 00:30:55.765 [2024-12-05 13:35:18.226255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.765 [2024-12-05 13:35:18.226263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.226577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.226585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.226879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.226887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.227200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.227209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.227405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.227414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.227720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.227727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.227881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.227889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.228072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.228080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.228396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.228405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 
00:30:55.766 [2024-12-05 13:35:18.228718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.228726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.229021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.229029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.229341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.229349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.229693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.229702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.229960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.229968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.230288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.230296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.230567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.230575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.230883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.230891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.231098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.231107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 00:30:55.766 [2024-12-05 13:35:18.231446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.766 [2024-12-05 13:35:18.231454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.766 qpair failed and we were unable to recover it. 
00:30:55.771 [2024-12-05 13:35:18.288650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.771 [2024-12-05 13:35:18.288658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.771 qpair failed and we were unable to recover it. 00:30:55.771 [2024-12-05 13:35:18.288931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.771 [2024-12-05 13:35:18.288940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.771 qpair failed and we were unable to recover it. 00:30:55.771 [2024-12-05 13:35:18.289107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.771 [2024-12-05 13:35:18.289115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.771 qpair failed and we were unable to recover it. 00:30:55.771 [2024-12-05 13:35:18.289432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.771 [2024-12-05 13:35:18.289440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.771 qpair failed and we were unable to recover it. 00:30:55.771 [2024-12-05 13:35:18.289758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.771 [2024-12-05 13:35:18.289766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.771 qpair failed and we were unable to recover it. 00:30:55.771 [2024-12-05 13:35:18.290040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.771 [2024-12-05 13:35:18.290048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.771 qpair failed and we were unable to recover it. 00:30:55.771 [2024-12-05 13:35:18.290369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.771 [2024-12-05 13:35:18.290376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.771 qpair failed and we were unable to recover it. 00:30:55.771 [2024-12-05 13:35:18.290574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.771 [2024-12-05 13:35:18.290583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.771 qpair failed and we were unable to recover it. 00:30:55.771 [2024-12-05 13:35:18.290838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.771 [2024-12-05 13:35:18.290846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.771 qpair failed and we were unable to recover it. 00:30:55.771 [2024-12-05 13:35:18.291195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.771 [2024-12-05 13:35:18.291203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.771 qpair failed and we were unable to recover it. 
00:30:55.771 [2024-12-05 13:35:18.291535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.771 [2024-12-05 13:35:18.291543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.771 qpair failed and we were unable to recover it. 00:30:55.771 [2024-12-05 13:35:18.291871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.771 [2024-12-05 13:35:18.291880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.771 qpair failed and we were unable to recover it. 00:30:55.771 [2024-12-05 13:35:18.292191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.771 [2024-12-05 13:35:18.292199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.771 qpair failed and we were unable to recover it. 00:30:55.771 [2024-12-05 13:35:18.292545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.771 [2024-12-05 13:35:18.292552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.771 qpair failed and we were unable to recover it. 00:30:55.771 [2024-12-05 13:35:18.292836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.771 [2024-12-05 13:35:18.292844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.771 qpair failed and we were unable to recover it. 00:30:55.771 [2024-12-05 13:35:18.293056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.771 [2024-12-05 13:35:18.293064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.771 qpair failed and we were unable to recover it. 00:30:55.771 [2024-12-05 13:35:18.293367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.772 [2024-12-05 13:35:18.293375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.772 qpair failed and we were unable to recover it. 00:30:55.772 [2024-12-05 13:35:18.293665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.772 [2024-12-05 13:35:18.293673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.772 qpair failed and we were unable to recover it. 00:30:55.772 [2024-12-05 13:35:18.293842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.772 [2024-12-05 13:35:18.293851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.772 qpair failed and we were unable to recover it. 00:30:55.772 [2024-12-05 13:35:18.294160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.772 [2024-12-05 13:35:18.294168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.772 qpair failed and we were unable to recover it. 
00:30:55.772 [2024-12-05 13:35:18.294474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.772 [2024-12-05 13:35:18.294482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.772 qpair failed and we were unable to recover it. 00:30:55.772 [2024-12-05 13:35:18.294792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.772 [2024-12-05 13:35:18.294800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.772 qpair failed and we were unable to recover it. 00:30:55.772 [2024-12-05 13:35:18.294977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.772 [2024-12-05 13:35:18.294987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.772 qpair failed and we were unable to recover it. 00:30:55.772 [2024-12-05 13:35:18.295312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.772 [2024-12-05 13:35:18.295320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:55.772 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.295652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.295662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.295877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.295887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.296238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.296246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.296522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.296530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.296835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.296844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.297123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.297131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 
00:30:56.049 [2024-12-05 13:35:18.297421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.297429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.297736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.297744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.298036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.298045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.298379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.298387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.298681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.298689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.299010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.299018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.299328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.299336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.299663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.299671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.299972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.299980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.300306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.300314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 
00:30:56.049 [2024-12-05 13:35:18.300581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.300589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.300916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.300924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.301251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.301259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.301564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.301572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.301886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.301894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.049 [2024-12-05 13:35:18.302203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.049 [2024-12-05 13:35:18.302212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.049 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.302519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.302527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.302729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.302737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.302909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.302917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.303280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.303288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 
00:30:56.050 [2024-12-05 13:35:18.303610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.303619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.303926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.303934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.304242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.304250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.304436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.304445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.304747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.304755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.304982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.304990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.305289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.305297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.305631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.305638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.305803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.305810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.306118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.306126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 
00:30:56.050 [2024-12-05 13:35:18.306434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.306442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.306731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.306738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.307029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.307037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.307343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.307351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.307663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.307671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.307977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.307988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.308297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.308305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.308609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.308618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.308924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.308932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.309233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.309240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 
00:30:56.050 [2024-12-05 13:35:18.309417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.309425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.309604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.309612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.309927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.050 [2024-12-05 13:35:18.309935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.050 qpair failed and we were unable to recover it. 00:30:56.050 [2024-12-05 13:35:18.310266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.310274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.310453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.310461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.310765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.310772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.311146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.311154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.311449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.311457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.311747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.311755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.312084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.312092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 
00:30:56.051 [2024-12-05 13:35:18.312388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.312396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.312634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.312643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.312970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.312978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.313183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.313191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.313492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.313500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.313827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.313835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.314022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.314030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.314348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.314355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.314521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.314529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.314848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.314856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 
00:30:56.051 [2024-12-05 13:35:18.315184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.315192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.315378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.315386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.315702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.315710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.315873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.315881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.316070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.316077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.316360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.316368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.316687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.316696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.317024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.317032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.317331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.317339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 00:30:56.051 [2024-12-05 13:35:18.317658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.051 [2024-12-05 13:35:18.317665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.051 qpair failed and we were unable to recover it. 
00:30:56.051 [2024-12-05 13:35:18.317973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.317981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.318317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.318326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.318601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.318609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.318813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.318821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.319136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.319145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.319422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.319432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.319745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.319753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.320048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.320055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.320372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.320380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.320554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.320561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 
00:30:56.052 [2024-12-05 13:35:18.320851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.320859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.321188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.321196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.321503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.321511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.321818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.321827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.322208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.322216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.322526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.322535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.322882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.322890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.323214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.323222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.323576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.323584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.323889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.323898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 
00:30:56.052 [2024-12-05 13:35:18.324213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.324221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.324417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.324426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.324576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.324585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.324920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.324928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.325199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.325207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.325498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.325506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.325841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.325850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.052 [2024-12-05 13:35:18.326031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.052 [2024-12-05 13:35:18.326040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.052 qpair failed and we were unable to recover it. 00:30:56.053 [2024-12-05 13:35:18.326356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.053 [2024-12-05 13:35:18.326365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.053 qpair failed and we were unable to recover it. 00:30:56.053 [2024-12-05 13:35:18.326657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.053 [2024-12-05 13:35:18.326666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.053 qpair failed and we were unable to recover it. 
00:30:56.053 [2024-12-05 13:35:18.326957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.053 [2024-12-05 13:35:18.326965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.053 qpair failed and we were unable to recover it. 00:30:56.053 [2024-12-05 13:35:18.327281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.053 [2024-12-05 13:35:18.327289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.053 qpair failed and we were unable to recover it. 00:30:56.053 [2024-12-05 13:35:18.327598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.053 [2024-12-05 13:35:18.327606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.053 qpair failed and we were unable to recover it. 00:30:56.053 [2024-12-05 13:35:18.327973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.053 [2024-12-05 13:35:18.327981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.053 qpair failed and we were unable to recover it. 00:30:56.053 [2024-12-05 13:35:18.328262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.053 [2024-12-05 13:35:18.328270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.053 qpair failed and we were unable to recover it. 00:30:56.053 [2024-12-05 13:35:18.328593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.053 [2024-12-05 13:35:18.328601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.053 qpair failed and we were unable to recover it. 00:30:56.053 [2024-12-05 13:35:18.328791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.053 [2024-12-05 13:35:18.328799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.053 qpair failed and we were unable to recover it. 00:30:56.053 [2024-12-05 13:35:18.329109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.053 [2024-12-05 13:35:18.329118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.053 qpair failed and we were unable to recover it. 00:30:56.053 [2024-12-05 13:35:18.329346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.053 [2024-12-05 13:35:18.329354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.053 qpair failed and we were unable to recover it. 00:30:56.053 [2024-12-05 13:35:18.329644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.053 [2024-12-05 13:35:18.329652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.053 qpair failed and we were unable to recover it. 
00:30:56.053 [2024-12-05 13:35:18.329958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.053 [2024-12-05 13:35:18.329966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.053 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats back-to-back, over 200 occurrences spanning 13:35:18.329958 through 13:35:18.392317 ...]
00:30:56.061 [2024-12-05 13:35:18.392309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.061 [2024-12-05 13:35:18.392317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.061 qpair failed and we were unable to recover it.
00:30:56.061 [2024-12-05 13:35:18.392629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.392637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.392941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.392950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.393237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.393245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.393549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.393557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.393900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.393909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.394314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.394321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.394628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.394636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.394958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.394966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.395312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.395320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.395539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.395547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 
00:30:56.061 [2024-12-05 13:35:18.395836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.395844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.396143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.396151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.396482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.396490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.396823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.396831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.397049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.397058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.397380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.397388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.397568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.397576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.397852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.397860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.061 [2024-12-05 13:35:18.398200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.061 [2024-12-05 13:35:18.398209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.061 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.398521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.398529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 
00:30:56.062 [2024-12-05 13:35:18.398682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.398690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.399033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.399041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.399356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.399365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.399690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.399699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.399989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.399997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.400304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.400312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.400496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.400505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.400822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.400830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.401135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.401143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.401443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.401451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 
00:30:56.062 [2024-12-05 13:35:18.401762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.401770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.401939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.401947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.402347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.402355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.402679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.402688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.402997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.403006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.403326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.403334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.403622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.403630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.403787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.403797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.404116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.404124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.404434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.404443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 
00:30:56.062 [2024-12-05 13:35:18.404768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.404778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.405112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.405121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.405425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.405433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.405730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.405738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.406077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.406085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.406400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.406408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.406588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.062 [2024-12-05 13:35:18.406597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.062 qpair failed and we were unable to recover it. 00:30:56.062 [2024-12-05 13:35:18.406882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.406890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.407198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.407205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.407541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.407549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 
00:30:56.063 [2024-12-05 13:35:18.407846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.407853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.408130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.408138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.408468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.408476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.408694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.408701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.409022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.409030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.409352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.409361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.409687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.409696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.409997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.410005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.410328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.410336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.410645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.410653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 
00:30:56.063 [2024-12-05 13:35:18.411003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.411011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.411326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.411334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.411636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.411647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.411947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.411956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.412272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.412281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.412584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.412593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.412906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.412914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.413110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.413118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.413430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.413437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.413721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.413729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 
00:30:56.063 [2024-12-05 13:35:18.414036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.414044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.414325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.414333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.414620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.414629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.414917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.414926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.415202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.063 [2024-12-05 13:35:18.415210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.063 qpair failed and we were unable to recover it. 00:30:56.063 [2024-12-05 13:35:18.415516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.415524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.415819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.415827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.416130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.416138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.416658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.416673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.416994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.417003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 
00:30:56.064 [2024-12-05 13:35:18.417316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.417324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.417614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.417622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.417931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.417940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.418109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.418118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.418427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.418434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.418726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.418734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.418776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.418784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.419068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.419077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.419394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.419403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.419734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.419743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 
00:30:56.064 [2024-12-05 13:35:18.420084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.420092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.420343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.420352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.420660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.420668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.420970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.420978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.421161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.421173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.421488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.421496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.421805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.421812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.422016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.422025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.422345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.422354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.422644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.422652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 
00:30:56.064 [2024-12-05 13:35:18.422931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.422939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.423254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.423262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.423551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.423565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.064 qpair failed and we were unable to recover it. 00:30:56.064 [2024-12-05 13:35:18.423856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.064 [2024-12-05 13:35:18.423868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.424149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.424159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.424468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.424477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.424772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.424780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.425080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.425089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.425351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.425359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.425669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.425677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 
00:30:56.065 [2024-12-05 13:35:18.425968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.425977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.426287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.426295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.426479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.426488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.426814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.426822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.427040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.427049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.427364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.427372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.427549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.427557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.427889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.427897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.428223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.428231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.428536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.428544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 
00:30:56.065 [2024-12-05 13:35:18.428824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.428832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.429012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.429021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.429183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.429191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.429515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.429524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.429848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.429857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.430165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.430173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.430531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.430539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.430918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.430927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.431248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.431256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 00:30:56.065 [2024-12-05 13:35:18.431440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.431449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it. 
00:30:56.065 [2024-12-05 13:35:18.431734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.065 [2024-12-05 13:35:18.431741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.065 qpair failed and we were unable to recover it.
[... the same three-line error (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously, timestamps advancing from 13:35:18.432032 to 13:35:18.493524 ...]
00:30:56.073 [2024-12-05 13:35:18.493839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.073 [2024-12-05 13:35:18.493847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.073 qpair failed and we were unable to recover it.
00:30:56.073 [2024-12-05 13:35:18.494028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.073 [2024-12-05 13:35:18.494037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.073 qpair failed and we were unable to recover it. 00:30:56.073 [2024-12-05 13:35:18.494322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.073 [2024-12-05 13:35:18.494330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.073 qpair failed and we were unable to recover it. 00:30:56.073 [2024-12-05 13:35:18.494625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.073 [2024-12-05 13:35:18.494634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.073 qpair failed and we were unable to recover it. 00:30:56.073 [2024-12-05 13:35:18.494930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.073 [2024-12-05 13:35:18.494938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.073 qpair failed and we were unable to recover it. 00:30:56.073 [2024-12-05 13:35:18.495266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.073 [2024-12-05 13:35:18.495273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.073 qpair failed and we were unable to recover it. 00:30:56.073 [2024-12-05 13:35:18.495566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.073 [2024-12-05 13:35:18.495574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.073 qpair failed and we were unable to recover it. 00:30:56.073 [2024-12-05 13:35:18.495892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.073 [2024-12-05 13:35:18.495900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.073 qpair failed and we were unable to recover it. 00:30:56.073 [2024-12-05 13:35:18.496172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.073 [2024-12-05 13:35:18.496180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.073 qpair failed and we were unable to recover it. 00:30:56.073 [2024-12-05 13:35:18.496487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.073 [2024-12-05 13:35:18.496496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.073 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.496787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.496796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 
00:30:56.074 [2024-12-05 13:35:18.497086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.497094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.497398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.497406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.497712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.497720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.498018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.498027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.498315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.498323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.498621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.498629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.498968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.498976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.499266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.499274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.499608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.499616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.499926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.499934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 
00:30:56.074 [2024-12-05 13:35:18.500148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.500156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.500446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.500454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.500746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.500754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.501030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.501039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.501262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.501270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.501576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.501584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.501736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.501744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.502054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.502062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.502372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.502380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.502680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.502688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 
00:30:56.074 [2024-12-05 13:35:18.503027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.503036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.074 qpair failed and we were unable to recover it. 00:30:56.074 [2024-12-05 13:35:18.503355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.074 [2024-12-05 13:35:18.503362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.503669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.503677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.503997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.504006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.504312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.504320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.504629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.504637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.504944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.504953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.505266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.505274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.505610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.505618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.505927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.505934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 
00:30:56.075 [2024-12-05 13:35:18.506112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.506120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.506419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.506427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.506757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.506767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.507042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.507050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.507366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.507373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.507590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.507599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.507962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.507971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.508288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.508296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.508602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.508610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.508904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.508912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 
00:30:56.075 [2024-12-05 13:35:18.509239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.509247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.509525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.509533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.509856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.509867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.510187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.510195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.510389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.510398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.510573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.510582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.510899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.510907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.511233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.511241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.511570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.511578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 00:30:56.075 [2024-12-05 13:35:18.511890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.075 [2024-12-05 13:35:18.511898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.075 qpair failed and we were unable to recover it. 
00:30:56.076 [2024-12-05 13:35:18.512201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.512209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.512489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.512497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.512794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.512802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.513119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.513127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.513435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.513442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.513742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.513750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.514041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.514050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.514367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.514376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.514558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.514566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.514869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.514879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 
00:30:56.076 [2024-12-05 13:35:18.515163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.515171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.515478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.515486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.515802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.515810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.516120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.516128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.516419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.516428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.516770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.516778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.517100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.517109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.517436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.517444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.517739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.517747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.518028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.518036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 
00:30:56.076 [2024-12-05 13:35:18.518390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.518398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.518733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.518741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.519069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.519077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.519393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.519401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.519708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.519717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.520046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.520055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.520344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.076 [2024-12-05 13:35:18.520352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.076 qpair failed and we were unable to recover it. 00:30:56.076 [2024-12-05 13:35:18.520665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.520673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.520980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.520988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.521302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.521310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 
00:30:56.077 [2024-12-05 13:35:18.521604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.521612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.521918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.521926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.522241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.522249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.522539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.522548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.522834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.522843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.523158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.523167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.523470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.523479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.523648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.523655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.523977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.523985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.524297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.524305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 
00:30:56.077 [2024-12-05 13:35:18.524616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.524623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.524933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.524942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.525249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.525257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.525574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.525582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.525891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.525899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.526187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.526196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.526501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.526510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.526819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.526827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.527115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.527124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.527284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.527294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 
00:30:56.077 [2024-12-05 13:35:18.527465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.527472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.527651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.527660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.527966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.527974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.528314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.528321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.528649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.077 [2024-12-05 13:35:18.528656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.077 qpair failed and we were unable to recover it. 00:30:56.077 [2024-12-05 13:35:18.528821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.078 [2024-12-05 13:35:18.528829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.078 qpair failed and we were unable to recover it. 00:30:56.078 [2024-12-05 13:35:18.529104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.078 [2024-12-05 13:35:18.529112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.078 qpair failed and we were unable to recover it. 00:30:56.078 [2024-12-05 13:35:18.529445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.078 [2024-12-05 13:35:18.529453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.078 qpair failed and we were unable to recover it. 00:30:56.078 [2024-12-05 13:35:18.529783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.078 [2024-12-05 13:35:18.529792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.078 qpair failed and we were unable to recover it. 00:30:56.078 [2024-12-05 13:35:18.530115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.078 [2024-12-05 13:35:18.530123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.078 qpair failed and we were unable to recover it. 
00:30:56.078 [2024-12-05 13:35:18.530431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.078 [2024-12-05 13:35:18.530439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.078 qpair failed and we were unable to recover it. 00:30:56.078 [2024-12-05 13:35:18.530750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.078 [2024-12-05 13:35:18.530758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.078 qpair failed and we were unable to recover it. 00:30:56.078 [2024-12-05 13:35:18.531086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.078 [2024-12-05 13:35:18.531094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.078 qpair failed and we were unable to recover it. 00:30:56.078 [2024-12-05 13:35:18.531414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.078 [2024-12-05 13:35:18.531422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.078 qpair failed and we were unable to recover it. 00:30:56.078 [2024-12-05 13:35:18.531731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.078 [2024-12-05 13:35:18.531739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.078 qpair failed and we were unable to recover it. 00:30:56.078 [2024-12-05 13:35:18.532049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.078 [2024-12-05 13:35:18.532058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.078 qpair failed and we were unable to recover it. 00:30:56.078 [2024-12-05 13:35:18.532355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.078 [2024-12-05 13:35:18.532364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.078 qpair failed and we were unable to recover it. 00:30:56.078 [2024-12-05 13:35:18.532680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.078 [2024-12-05 13:35:18.532688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.078 qpair failed and we were unable to recover it. 00:30:56.078 [2024-12-05 13:35:18.532992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.078 [2024-12-05 13:35:18.533001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.078 qpair failed and we were unable to recover it. 00:30:56.078 [2024-12-05 13:35:18.533163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.078 [2024-12-05 13:35:18.533172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.078 qpair failed and we were unable to recover it. 
00:30:56.078 [2024-12-05 13:35:18.533501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.078 [2024-12-05 13:35:18.533508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.078 qpair failed and we were unable to recover it.
[... identical connect() failure entries (errno = 111) against tqpair=0x7fd858000b90, addr=10.0.0.2, port=4420 repeat for every reconnect attempt between 13:35:18.533 and 13:35:18.598; duplicate entries elided ...]
00:30:56.086 [2024-12-05 13:35:18.598061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.086 [2024-12-05 13:35:18.598090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.086 qpair failed and we were unable to recover it.
00:30:56.086 [2024-12-05 13:35:18.598429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.086 [2024-12-05 13:35:18.598439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.086 qpair failed and we were unable to recover it. 00:30:56.086 [2024-12-05 13:35:18.598751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.086 [2024-12-05 13:35:18.598760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.086 qpair failed and we were unable to recover it. 00:30:56.086 [2024-12-05 13:35:18.599079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.086 [2024-12-05 13:35:18.599087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.086 qpair failed and we were unable to recover it. 00:30:56.364 [2024-12-05 13:35:18.599385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.599394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 00:30:56.364 [2024-12-05 13:35:18.599685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.599695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 00:30:56.364 [2024-12-05 13:35:18.600009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.600017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 00:30:56.364 [2024-12-05 13:35:18.600324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.600332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 00:30:56.364 [2024-12-05 13:35:18.600660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.600669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 00:30:56.364 [2024-12-05 13:35:18.600961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.600969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 00:30:56.364 [2024-12-05 13:35:18.601280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.601291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 
00:30:56.364 [2024-12-05 13:35:18.601578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.601587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 00:30:56.364 [2024-12-05 13:35:18.601899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.601907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 00:30:56.364 [2024-12-05 13:35:18.602223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.602230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 00:30:56.364 [2024-12-05 13:35:18.602423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.602433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 00:30:56.364 [2024-12-05 13:35:18.602705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.602713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 00:30:56.364 [2024-12-05 13:35:18.603052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.603060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 00:30:56.364 [2024-12-05 13:35:18.603397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.603405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 00:30:56.364 [2024-12-05 13:35:18.603721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.603729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 00:30:56.364 [2024-12-05 13:35:18.603920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.603928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 00:30:56.364 [2024-12-05 13:35:18.604194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.364 [2024-12-05 13:35:18.604202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.364 qpair failed and we were unable to recover it. 
00:30:56.365 [2024-12-05 13:35:18.604444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.604452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.604652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.604660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.604987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.604996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.605296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.605304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.605629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.605637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.605877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.605886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.606187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.606195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.606507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.606514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.606826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.606835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.607106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.607114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 
00:30:56.365 [2024-12-05 13:35:18.607433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.607441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.607626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.607636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.607826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.607834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.608010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.608019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.608325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.608334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.608618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.608626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.608932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.608940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.609131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.609139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.609469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.609477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.609770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.609778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 
00:30:56.365 [2024-12-05 13:35:18.610094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.610102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.610411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.610419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.610721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.610728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.610970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.610979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.611320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.611327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.611622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.611630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.611950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.611958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.612000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.612007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.612273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.612281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.612595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.612605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 
00:30:56.365 [2024-12-05 13:35:18.612917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.612925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.613126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.613135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.613437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.613444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.613758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.613766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.614092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.614100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.614411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.614418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.614750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.614758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.365 [2024-12-05 13:35:18.614947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.365 [2024-12-05 13:35:18.614954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.365 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.615267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.615275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.615452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.615460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 
00:30:56.366 [2024-12-05 13:35:18.615790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.615798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.615992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.616001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.616187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.616195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.616524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.616532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.616843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.616851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.617162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.617170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.617457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.617465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.617670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.617678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.617991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.618000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.618321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.618329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 
00:30:56.366 [2024-12-05 13:35:18.618636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.618645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.619034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.619042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.619327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.619335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.619669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.619677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.619987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.619996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.620290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.620298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.620602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.620610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.620958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.620966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.621318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.621327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.621631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.621640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 
00:30:56.366 [2024-12-05 13:35:18.621971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.621979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.622313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.622321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.622670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.622677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.622987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.622996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.623303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.623312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.623601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.623609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.623920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.623928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.624231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.624239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.624531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.624539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.624868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.624878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 
00:30:56.366 [2024-12-05 13:35:18.625037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.625045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.625347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.625355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.625651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.625659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.625987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.625996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.626311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.626319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.366 qpair failed and we were unable to recover it. 00:30:56.366 [2024-12-05 13:35:18.626632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.366 [2024-12-05 13:35:18.626640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.626930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.626938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.627257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.627265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.627583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.627590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.627762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.627769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 
00:30:56.367 [2024-12-05 13:35:18.628070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.628078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.628382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.628389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.628584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.628592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.628906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.628914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.629269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.629276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.629569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.629577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.629728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.629736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.630058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.630065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.630399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.630407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.630621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.630630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 
00:30:56.367 [2024-12-05 13:35:18.630948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.630955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.631277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.631285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.631464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.631472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.631652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.631660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.631932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.631940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.632247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.632255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.632600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.632609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.632903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.632911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.633233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.633240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.633552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.633560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 
00:30:56.367 [2024-12-05 13:35:18.633867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.633876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.634064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.634071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.634388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.634396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.634703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.634712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.635009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.635017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.635206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.635215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.635549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.635557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.635733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.635741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.636032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.636040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 00:30:56.367 [2024-12-05 13:35:18.636377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.367 [2024-12-05 13:35:18.636386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.367 qpair failed and we were unable to recover it. 
00:30:56.367 [2024-12-05 13:35:18.636676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.367 [2024-12-05 13:35:18.636683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.367 qpair failed and we were unable to recover it.
00:30:56.373 [...] last error sequence repeated ~210 times (timestamps 13:35:18.636928 through 13:35:18.698259): every connect() attempt to 10.0.0.2:4420 failed with errno = 111 and the qpair could not be recovered.
00:30:56.373 [2024-12-05 13:35:18.698586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.373 [2024-12-05 13:35:18.698594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.373 qpair failed and we were unable to recover it. 00:30:56.373 [2024-12-05 13:35:18.698906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.373 [2024-12-05 13:35:18.698914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.373 qpair failed and we were unable to recover it. 00:30:56.373 [2024-12-05 13:35:18.699234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.373 [2024-12-05 13:35:18.699242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.373 qpair failed and we were unable to recover it. 00:30:56.373 [2024-12-05 13:35:18.699532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.373 [2024-12-05 13:35:18.699540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.373 qpair failed and we were unable to recover it. 00:30:56.373 [2024-12-05 13:35:18.699832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.373 [2024-12-05 13:35:18.699840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.373 qpair failed and we were unable to recover it. 00:30:56.373 [2024-12-05 13:35:18.700146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.373 [2024-12-05 13:35:18.700154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.373 qpair failed and we were unable to recover it. 00:30:56.373 [2024-12-05 13:35:18.700460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.373 [2024-12-05 13:35:18.700469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.373 qpair failed and we were unable to recover it. 00:30:56.373 [2024-12-05 13:35:18.700632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.373 [2024-12-05 13:35:18.700641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.373 qpair failed and we were unable to recover it. 00:30:56.373 [2024-12-05 13:35:18.700974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.373 [2024-12-05 13:35:18.700983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.373 qpair failed and we were unable to recover it. 00:30:56.373 [2024-12-05 13:35:18.701303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.373 [2024-12-05 13:35:18.701312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.373 qpair failed and we were unable to recover it. 
00:30:56.373 [2024-12-05 13:35:18.701624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.701631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.701939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.701947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.702271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.702279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.702607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.702615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.702938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.702947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.703255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.703263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.703552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.703562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.703868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.703876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.704167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.704176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.704340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.704349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 
00:30:56.374 [2024-12-05 13:35:18.704681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.704689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.704889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.704897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.705191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.705198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.705506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.705514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.705802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.705810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.706107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.706115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.706414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.706422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.706730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.706738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 00:30:56.374 [2024-12-05 13:35:18.707047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.374 [2024-12-05 13:35:18.707055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.374 qpair failed and we were unable to recover it. 
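On Linux, errno = 111 is ECONNREFUSED: the TCP connection attempt to 10.0.0.2:4420 is rejected because nothing is listening on that port any more. A minimal, illustrative sketch of the retry behavior the host side is exhibiting above (not SPDK's actual posix_sock_create; the address and port are taken from the log, the retry count is an assumption):

/* Illustrative only: repeatedly try to connect to the NVMe-oF target's
 * TCP listener, the way the host keeps doing in this log. While no
 * process listens on the port, connect() fails with errno 111. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    for (int attempt = 0; attempt < 5; attempt++) {   /* assumed bound */
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            printf("connected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        /* Prints "connect() failed, errno = 111 (Connection refused)"
         * until the target is listening again. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        close(fd);
        sleep(1);
    }
    return 1;
}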
00:30:56.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1134881 Killed "${NVMF_APP[@]}" "$@"
00:30:56.374 13:35:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:56.374 13:35:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:56.374 13:35:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:56.374 13:35:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:56.374 13:35:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair error triplets continue to interleave with the trace, 13:35:18.707381 through 13:35:18.709313 ...]
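The "Killed" message is the harness deliberately SIGKILLing the running target app (pid 1134881) to simulate a target disconnect; disconnect_init then brings up a replacement via nvmfappstart -m 0xF0, which is why the host sees ECONNREFUSED until the new target is listening. A rough C sketch of that kill-and-respawn step (the real harness does this from bash, inside the cvl_0_0_ns_spdk network namespace; the pid, binary path, and argv are copied from the log, everything else is illustrative):

/* Illustrative only: SIGKILL the old target, then start a new one. */
#include <signal.h>
#include <unistd.h>

int main(void)
{
    pid_t old_tgt = 1134881;    /* pid from the "Killed" line above */
    kill(old_tgt, SIGKILL);     /* produces: "1134881 Killed ${NVMF_APP[@]}" */

    pid_t new_tgt = fork();     /* respawn the target application */
    if (new_tgt == 0) {
        /* argv mirrors the command traced later in this log */
        execl("/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt",
              "nvmf_tgt", "-i", "0", "-e", "0xFFFF", "-m", "0xF0", (char *)NULL);
        _exit(127);             /* exec failed */
    }
    return new_tgt > 0 ? 0 : 1;
}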
[... connect()/qpair error triplets continue, 13:35:18.709641 through 13:35:18.714949 ...]
[... connect()/qpair error triplets continue, 13:35:18.715288 through 13:35:18.717185, interleaved with: ...]
00:30:56.375 13:35:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1135912
00:30:56.375 13:35:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1135912
00:30:56.375 13:35:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1135912 ']'
00:30:56.375 13:35:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:56.375 13:35:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:56.375 13:35:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:56.375 13:35:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:56.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:56.375 13:35:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:56.375 13:35:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair error triplets interleave throughout, 13:35:18.717496 through 13:35:18.718857 ...]
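waitforlisten now polls until the freshly started nvmf_tgt (pid 1135912) is up and its RPC server accepts connections on rpc_addr=/var/tmp/spdk.sock, giving up after max_retries=100. The real helper is a bash function in autotest_common.sh; this is only a C sketch of the same readiness check, with the poll interval assumed:

/* Illustrative only: retry connecting to the SPDK RPC UNIX socket
 * until the new target answers, mirroring waitforlisten. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int rpc_listening(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    int ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return ok;
}

int main(void)
{
    for (int retry = 0; retry < 100; retry++) {       /* max_retries=100 */
        if (rpc_listening("/var/tmp/spdk.sock")) {    /* rpc_addr from trace */
            puts("nvmf_tgt is up and listening");
            return 0;
        }
        usleep(500 * 1000);                           /* assumed interval */
    }
    fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
    return 1;
}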
[... the connect()/qpair error triplets repeat continuously, 13:35:18.719156 through 13:35:18.745646, while the host keeps retrying ...]
00:30:56.378 [2024-12-05 13:35:18.745836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.745843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.746052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.746059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.746366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.746372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.746439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.746445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.746742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.746749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.746944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.746951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.747312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.747319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.747638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.747644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.747962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.747969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.748295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.748302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 
00:30:56.378 [2024-12-05 13:35:18.748652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.748660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.748977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.748985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.749315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.749323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.749640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.749646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.749977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.749984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.750312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.750319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.750633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.750639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.750974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.750982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.751313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.751319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.751538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.751546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 
00:30:56.378 [2024-12-05 13:35:18.751739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.751747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.752099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.752107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.752514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.752522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.752820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.752827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.753124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.753131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.753467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.753473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.753773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.753780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.378 [2024-12-05 13:35:18.754097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.378 [2024-12-05 13:35:18.754105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.378 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.754447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.754454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.754770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.754778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 
00:30:56.379 [2024-12-05 13:35:18.755128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.755136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.755320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.755327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.755525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.755532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.755704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.755716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.756010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.756018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.756332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.756339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.756673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.756680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.757019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.757027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.757072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.757080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.757376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.757383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 
00:30:56.379 [2024-12-05 13:35:18.757682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.757690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.757891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.757899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.758186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.758193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.758485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.758492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.758693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.758700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.758937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.758944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.759253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.759260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.759589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.759595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.759761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.759769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.760116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.760123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 
00:30:56.379 [2024-12-05 13:35:18.760422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.760428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.760753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.760760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.760961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.760968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.761339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.761346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.761669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.761677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.761838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.761845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.762025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.762034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.762325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.762332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.762642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.762649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.762808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.762815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 
00:30:56.379 [2024-12-05 13:35:18.763169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.763176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.763579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.763586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.763876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.763885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.764183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.764190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.764421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.764435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.379 [2024-12-05 13:35:18.764655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.379 [2024-12-05 13:35:18.764663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.379 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.764833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.764841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.765027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.765035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.765353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.765360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.765601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.765608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 
00:30:56.380 [2024-12-05 13:35:18.765803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.765811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.765994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.766002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.766290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.766297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.766499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.766506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.766921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.766928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.766972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.766978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.767140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.767155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.767446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.767453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.767638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.767647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.767967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.767974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 
00:30:56.380 [2024-12-05 13:35:18.768372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.768379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.768634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.768642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.768969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.768977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.769301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.769308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.769368] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:30:56.380 [2024-12-05 13:35:18.769418] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.380 [2024-12-05 13:35:18.769594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.769603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.769934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.769942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.770124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.770131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.770318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.770325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.770506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.770513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 
00:30:56.380 [2024-12-05 13:35:18.770831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.770839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.771147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.771155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.771456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.771465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.771798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.771806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.772121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.772129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.772456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.772463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.772759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.772766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.773080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.773088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.773387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.773396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.773717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.773725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 
00:30:56.380 [2024-12-05 13:35:18.774038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.774046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.774352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.774360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.774668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.774676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.380 [2024-12-05 13:35:18.774999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.380 [2024-12-05 13:35:18.775008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.380 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.775302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.775310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.775693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.775701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.776023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.776031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.776203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.776211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.776406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.776415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.776724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.776732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 
00:30:56.381 [2024-12-05 13:35:18.777032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.777040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.777372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.777380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.777572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.777579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.777877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.777885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.778193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.778201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.778392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.778402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.778751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.778759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.779076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.779084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.779401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.779409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.779609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.779617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 
00:30:56.381 [2024-12-05 13:35:18.779938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.779947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.780285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.780293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.780650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.780658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.780858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.780873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.781207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.781216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.781531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.781539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.781708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.781716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.782030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.782038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.782378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.782386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 00:30:56.381 [2024-12-05 13:35:18.782561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.782569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 
00:30:56.381 [2024-12-05 13:35:18.782866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.381 [2024-12-05 13:35:18.782874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.381 qpair failed and we were unable to recover it. 
00:30:56.381 [... identical connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." triplet repeated verbatim for every reconnect attempt against tqpair=0x7fd858000b90, addr=10.0.0.2, port=4420 between 13:35:18.782 and 13:35:18.842; duplicate log lines elided ...] 
00:30:56.387 [2024-12-05 13:35:18.842640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.842647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 
00:30:56.387 [2024-12-05 13:35:18.842968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.842975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.843302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.843309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.843611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.843619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.843941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.843948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.844140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.844147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.844372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.844380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.844573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.844580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.844731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.844739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.845138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.845145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.845438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.845445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 
00:30:56.387 [2024-12-05 13:35:18.845760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.845768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.846056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.846063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.846250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.846258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.846548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.846555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.846852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.846858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.847059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.847067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.847384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.847391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.847602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.847619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.847929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.847936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.848258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.848265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 
00:30:56.387 [2024-12-05 13:35:18.848473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.387 [2024-12-05 13:35:18.848481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.387 qpair failed and we were unable to recover it. 00:30:56.387 [2024-12-05 13:35:18.848804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.848811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.849139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.849146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.849489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.849495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.849830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.849837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.850173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.850180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.850509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.850515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.850739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.850745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.851039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.851046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.851226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.851234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 
00:30:56.388 [2024-12-05 13:35:18.851464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.851471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.851659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.851666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.851864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.851872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.852171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.852178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.852352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.852359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.852728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.852734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.852866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.852873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.853310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.853317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.853646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.853654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.853933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.853941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 
00:30:56.388 [2024-12-05 13:35:18.854285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.854292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.854648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.854654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.854949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.854957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.855284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.855290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.855590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.855596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.855815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.855822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.856004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.856011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.856361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.856367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.856670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.856677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.857002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.857009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 
00:30:56.388 [2024-12-05 13:35:18.857186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.857193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.857362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.857369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.857647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.857654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.857848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.857855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.858028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.858035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.858203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.858210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.858507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.858514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.858830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.388 [2024-12-05 13:35:18.858837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.388 qpair failed and we were unable to recover it. 00:30:56.388 [2024-12-05 13:35:18.859164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.859172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.859455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.859462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 
00:30:56.389 [2024-12-05 13:35:18.859777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.859784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.860103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.860110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.860195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.860202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.860387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.860394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.860712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.860719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.861034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.861041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.861360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.861367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.861701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.861707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.861952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.861959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.862302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.862308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 
00:30:56.389 [2024-12-05 13:35:18.862473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.862480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.862719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.862725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.863041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.863048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.863220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.863227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.863504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.863511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.863688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.863697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.864005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.864013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.864324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.864331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.864611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.864618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.865084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.865091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 
00:30:56.389 [2024-12-05 13:35:18.865394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.865401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.865663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.865671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.865992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.866001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.866184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.866192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.866366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.866373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.866704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.866712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.867032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.867039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.867374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.867381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.867688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.867694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.867932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.867939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 
00:30:56.389 [2024-12-05 13:35:18.868283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.868291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.868493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.868499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.868899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.868906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.869111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.869118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.869350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.869356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.869672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.389 [2024-12-05 13:35:18.869679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.389 qpair failed and we were unable to recover it. 00:30:56.389 [2024-12-05 13:35:18.869855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.869865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.870053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.870059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.870354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.870360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.870657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.870664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 
00:30:56.390 [2024-12-05 13:35:18.871006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.871013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.871346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.871353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.871653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.871660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.871951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.871958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.872167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.872174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.872381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.872388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.872753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.872759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.872954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.872961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.873296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.873303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.873518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.873525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 
00:30:56.390 [2024-12-05 13:35:18.873677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.873684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.873970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.873976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.874259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.874265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.874596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.874602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.874933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.874940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.875180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.875187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.875525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.875531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.875848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.875854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.876033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.876040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 00:30:56.390 [2024-12-05 13:35:18.876340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.390 [2024-12-05 13:35:18.876346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.390 qpair failed and we were unable to recover it. 
00:30:56.390 [2024-12-05 13:35:18.876539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.390 [2024-12-05 13:35:18.876547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.390 qpair failed and we were unable to recover it.
00:30:56.390 [2024-12-05 13:35:18.876719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.390 [2024-12-05 13:35:18.876725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.390 qpair failed and we were unable to recover it.
00:30:56.390 [2024-12-05 13:35:18.876898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.390 [2024-12-05 13:35:18.876907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.390 qpair failed and we were unable to recover it.
00:30:56.390 [2024-12-05 13:35:18.876968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:56.390 [2024-12-05 13:35:18.877281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.390 [2024-12-05 13:35:18.877288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.390 qpair failed and we were unable to recover it.
00:30:56.390 [2024-12-05 13:35:18.877588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.390 [2024-12-05 13:35:18.877595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.390 qpair failed and we were unable to recover it.
00:30:56.390 [2024-12-05 13:35:18.877772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.390 [2024-12-05 13:35:18.877779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.390 qpair failed and we were unable to recover it.
00:30:56.390 [2024-12-05 13:35:18.878214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.390 [2024-12-05 13:35:18.878221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.390 qpair failed and we were unable to recover it.
00:30:56.390 [2024-12-05 13:35:18.878404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.390 [2024-12-05 13:35:18.878411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.390 qpair failed and we were unable to recover it.
00:30:56.390 [2024-12-05 13:35:18.878627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.390 [2024-12-05 13:35:18.878633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.390 qpair failed and we were unable to recover it.
00:30:56.390 [2024-12-05 13:35:18.878812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.390 [2024-12-05 13:35:18.878820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.390 qpair failed and we were unable to recover it.
[... identical connect()/qpair failure sequences continue from 13:35:18.879140 through 13:35:18.889821 ...]
00:30:56.392 [2024-12-05 13:35:18.890155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.392 [2024-12-05 13:35:18.890162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.392 qpair failed and we were unable to recover it.
00:30:56.392 [2024-12-05 13:35:18.890462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.890469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.890647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.890654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.891015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.891021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.891379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.891386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.891566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.891573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.891816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.891822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.891946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.891954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.892239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.892246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.892444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.892451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.892787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.892794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 
00:30:56.392 [2024-12-05 13:35:18.892976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.892984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.893029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.893036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.893355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.893361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.893647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.893654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.893933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.893940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.894269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.894275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.894609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.894616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.894906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.894913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.895173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.895180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.895467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.895473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 
00:30:56.392 [2024-12-05 13:35:18.895777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.895785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.896078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.896085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.896383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.896389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.896700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.896706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.896927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.896934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.897336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.897342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.897715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.897722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.898022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.898029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.898238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.898245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.898468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.898475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 
00:30:56.392 [2024-12-05 13:35:18.898648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.898655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.899033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.899040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.899331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.899339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.899730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.899738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.900066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.900073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.900265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.900272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.392 [2024-12-05 13:35:18.900635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.392 [2024-12-05 13:35:18.900642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.392 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.900959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.900966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.901137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.901145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.901304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.901311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 
00:30:56.393 [2024-12-05 13:35:18.901628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.901635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.901940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.901947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.902138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.902145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.902425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.902432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.902745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.902752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.903086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.903093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.903279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.903289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.903612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.903619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.903933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.903940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.904267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.904274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 
00:30:56.393 [2024-12-05 13:35:18.904454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.904461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.904804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.904811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.905197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.905204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.905526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.905533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.905829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.905836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.906151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.906158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.906448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.906455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.906650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.906657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.907012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.907019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.907324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.907330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 
00:30:56.393 [2024-12-05 13:35:18.907667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.907675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.908000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.908008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.908237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.908244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.908528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.908535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.908813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.908820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.909148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.909155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.909473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.909480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.909775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.909782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.909996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.910004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 00:30:56.393 [2024-12-05 13:35:18.910329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.393 [2024-12-05 13:35:18.910336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.393 qpair failed and we were unable to recover it. 
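errno = 111 is ECONNREFUSED on Linux: nothing was listening on 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) yet, so the kernel refused each connect() and the NVMe-oF initiator kept retrying the queue pair. A minimal standalone probe that reproduces the same failure with plain POSIX sockets is sketched below; the address and port come from the log, everything else is illustrative and is not SPDK code.

/* connect_probe.c - reproduce the connect() failure pattern from the log.
 * Build: cc -o connect_probe connect_probe.c
 * Usage: ./connect_probe 10.0.0.2 4420
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *addr = argc > 1 ? argv[1] : "10.0.0.2";
    int port = argc > 2 ? atoi(argv[2]) : 4420;

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons((uint16_t)port);
    if (inet_pton(AF_INET, addr, &sa.sin_addr) != 1) {
        fprintf(stderr, "bad address: %s\n", addr);
        return 1;
    }

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* With no listener on the target port this fails with errno = 111
     * (ECONNREFUSED), the same value printed by posix_sock_create above. */
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        close(fd);
        return 1;
    }

    printf("connected to %s:%d\n", addr, port);
    close(fd);
    return 0;
}

Note that the numeric value 111 for ECONNREFUSED is Linux-specific; portable code should compare against the errno constant, not the number.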
[... failure pattern continues, 13:35:18.910645 through 13:35:18.912604 ...]
00:30:56.394 [2024-12-05 13:35:18.912629] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:56.394 [2024-12-05 13:35:18.912657] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:56.394 [2024-12-05 13:35:18.912665] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:56.394 [2024-12-05 13:35:18.912671] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:56.394 [2024-12-05 13:35:18.912677] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:56.394 [2024-12-05 13:35:18.912935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.394 [2024-12-05 13:35:18.912943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.394 qpair failed and we were unable to recover it.
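The app.c notices above record that the nvmf target was started with every tracepoint group enabled (mask 0xFFFF) and keeps its trace ring in the shared-memory file /dev/shm/nvmf_trace.0, which can either be read live with 'spdk_trace -s nvmf -i 0' or copied out for offline analysis. A small sketch of that copy step in C, using the path from the notice; the destination file name is arbitrary:

/* save_trace.c - snapshot the SPDK trace shared-memory file named in the
 * notice above so it can be inspected offline with the spdk_trace tool.
 * Build: cc -o save_trace save_trace.c
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *src = "/dev/shm/nvmf_trace.0";  /* from the app.c notice */
    const char *dst = "nvmf_trace.snapshot";    /* arbitrary output name */
    int rc = 0;

    int in = open(src, O_RDONLY);
    if (in < 0) { perror(src); return 1; }
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { perror(dst); close(in); return 1; }

    char buf[1 << 16];
    ssize_t n;
    while ((n = read(in, buf, sizeof(buf))) > 0) {
        if (write(out, buf, (size_t)n) != n) { perror("write"); rc = 1; break; }
    }
    if (n < 0) { perror("read"); rc = 1; }

    close(in);
    close(out);
    return rc;
}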
[... failure pattern continues, 13:35:18.913177 through 13:35:18.914058 ...]
00:30:56.672 [2024-12-05 13:35:18.914125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:30:56.672 [2024-12-05 13:35:18.914283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:30:56.672 [2024-12-05 13:35:18.914417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:56.672 [2024-12-05 13:35:18.914418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
[... failure pattern continues, interleaved with the reactor notices, 13:35:18.914291 through 13:35:18.915411 ...]
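The reactor.c notices mark the SPDK event framework starting one reactor (a per-core poller thread) for each core in the configured reactor mask; cores 4-7 here correspond to a mask of 0xF0. Below is a hedged sketch of how an application requests that placement at startup, assuming a recent SPDK with the two-argument spdk_app_opts_init(); the app name and printed message are illustrative:

/* app_mask.c - minimal SPDK app bootstrap pinning reactors to cores 4-7.
 * A sketch only: assumes SPDK headers/libraries are installed and linked
 * (e.g. via `pkg-config --cflags --libs spdk_event`).
 */
#include <stdio.h>
#include "spdk/event.h"

static void
start_fn(void *ctx)
{
    (void)ctx;
    printf("reactors are up on the requested cores; shutting down\n");
    spdk_app_stop(0);   /* unwinds the blocking spdk_app_start() below */
}

int
main(void)
{
    struct spdk_app_opts opts;
    int rc;

    /* Assumes the two-argument form of spdk_app_opts_init() used by
     * current SPDK releases. */
    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "reactor_mask_demo";   /* illustrative app name */
    opts.reactor_mask = "0xF0";        /* cores 4-7, as in the notices above */

    rc = spdk_app_start(&opts, start_fn, NULL);
    spdk_app_fini();
    return rc;
}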
[... failure pattern continues, 13:35:18.915712 through 13:35:18.937519 ...]
00:30:56.675 [2024-12-05 13:35:18.937834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.675 [2024-12-05 13:35:18.937843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.675 qpair failed and we were unable to recover it.
00:30:56.675 [2024-12-05 13:35:18.938155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.938164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.938339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.938347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.938678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.938686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.939000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.939009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.939207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.939214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.939548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.939556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.939889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.939897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.940223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.940231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.940532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.940541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.940819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.940829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 
00:30:56.675 [2024-12-05 13:35:18.941141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.941149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.941310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.941318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.941651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.941660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.941969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.941978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.942157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.942164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.942214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.942222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.942535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.942543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.942766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.942775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.942961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.942968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.943264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.943271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 
00:30:56.675 [2024-12-05 13:35:18.943648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.943656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.943939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.943947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.944272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.944280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.944446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.944454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.944505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.944512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.944674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.944681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.945049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.945057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.945385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.945392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.945704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.945712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.946023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.946031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 
00:30:56.675 [2024-12-05 13:35:18.946361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.946369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.946681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.946689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.947003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.947012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.947326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.947335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.947655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.947664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.947821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.675 [2024-12-05 13:35:18.947831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.675 qpair failed and we were unable to recover it. 00:30:56.675 [2024-12-05 13:35:18.948123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.948135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.948458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.948467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.948774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.948783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.948960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.948968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 
00:30:56.676 [2024-12-05 13:35:18.949314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.949322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.949489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.949497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.949842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.949850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.950163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.950171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.950473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.950481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.950812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.950820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.951139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.951147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.951328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.951336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.951639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.951647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.951975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.951983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 
00:30:56.676 [2024-12-05 13:35:18.952390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.952398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.952698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.952707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.952891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.952899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.953278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.953287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.953596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.953604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.953794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.953803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.954129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.954138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.954469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.954478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.954796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.954804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.954984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.954992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 
00:30:56.676 [2024-12-05 13:35:18.955224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.955232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.955546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.955554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.955920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.955929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.956275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.956283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.956590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.956598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.956919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.956927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.957132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.957140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.957443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.957451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.957621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.957629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.957931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.957939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 
00:30:56.676 [2024-12-05 13:35:18.958127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.958136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.958302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.958310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.958605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.958613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.958933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.676 [2024-12-05 13:35:18.958941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.676 qpair failed and we were unable to recover it. 00:30:56.676 [2024-12-05 13:35:18.959108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.959115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.959279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.959286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.959596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.959606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.959654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.959662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.959926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.959934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.960270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.960278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 
00:30:56.677 [2024-12-05 13:35:18.960590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.960598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.960782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.960790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.961022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.961031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.961354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.961362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.961520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.961528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.961852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.961860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.962222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.962230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.962538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.962546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.962723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.962732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.962954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.962962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 
00:30:56.677 [2024-12-05 13:35:18.963263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.963271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.963458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.963466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.963787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.963794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.964032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.964040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.964227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.964236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.964572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.964579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.964899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.964907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.965093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.965102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.965411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.965418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.965584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.965592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 
00:30:56.677 [2024-12-05 13:35:18.965901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.965909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.966116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.966124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.966426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.966434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.966779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.966787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.967083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.677 [2024-12-05 13:35:18.967091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.677 qpair failed and we were unable to recover it. 00:30:56.677 [2024-12-05 13:35:18.967130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.967137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.967432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.967440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.967759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.967767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.968098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.968107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.968286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.968294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 
00:30:56.678 [2024-12-05 13:35:18.968601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.968608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.968899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.968908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.969233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.969242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.969525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.969533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.969720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.969728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.970010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.970018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.970342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.970352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.970669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.970677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.970986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.970995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.971317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.971325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 
00:30:56.678 [2024-12-05 13:35:18.971630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.971638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.971988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.971996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.972314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.972322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.972637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.972645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.972822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.972831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.973135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.973143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.973334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.973343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.973495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.973504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.973805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.973814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 00:30:56.678 [2024-12-05 13:35:18.974128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.678 [2024-12-05 13:35:18.974136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420 00:30:56.678 qpair failed and we were unable to recover it. 
00:30:56.678 [2024-12-05 13:35:18.974328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.678 [2024-12-05 13:35:18.974336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd858000b90 with addr=10.0.0.2, port=4420
00:30:56.678 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated for tqpair=0x7fd858000b90 through 2024-12-05 13:35:19.004 ...]
00:30:56.681 Read completed with error (sct=0, sc=8)
00:30:56.681 starting I/O failed
[... 31 more Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:30:56.682 [2024-12-05 13:35:19.005656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:56.682 [2024-12-05 13:35:19.006124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.682 [2024-12-05 13:35:19.006164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:56.682 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated for tqpair=0x1d3a490 through 2024-12-05 13:35:19.031 ...]
00:30:56.684 [2024-12-05 13:35:19.031915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.031926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.032226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.032237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.032570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.032581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.032743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.032753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.033093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.033104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.033410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.033421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.033701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.033712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.034093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.034104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.034423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.034435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.034622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.034634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 
00:30:56.684 [2024-12-05 13:35:19.034823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.034834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.035044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.035055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.035220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.035231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.035397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.035409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.035588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.035599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.035765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.035776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.036073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.036084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.036416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.036427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.036712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.036723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 00:30:56.684 [2024-12-05 13:35:19.037033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.684 [2024-12-05 13:35:19.037051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.684 qpair failed and we were unable to recover it. 
00:30:56.684 [2024-12-05 13:35:19.037254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.037265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.037554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.037565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.037966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.037978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.038285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.038296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.038607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.038617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.038930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.038942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.039156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.039167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.039475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.039485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.039795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.039805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.040125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.040136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 
00:30:56.685 [2024-12-05 13:35:19.040419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.040430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.040480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.040492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.040539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.040550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.040830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.040842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.041184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.041195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.041532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.041543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.041869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.041880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.042196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.042207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.042610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.042621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.042808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.042819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 
00:30:56.685 [2024-12-05 13:35:19.043136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.043147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.043340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.043350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.043636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.043647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.043943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.043955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.044268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.044279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.044578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.044589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.044922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.044933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.045252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.045263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.045446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.045457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.045763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.045774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 
00:30:56.685 [2024-12-05 13:35:19.045987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.045999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.046211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.046222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.046536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.046547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.046707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.046718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.047069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.047080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.047415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.047426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.047476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.047485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.047776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.685 [2024-12-05 13:35:19.047787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.685 qpair failed and we were unable to recover it. 00:30:56.685 [2024-12-05 13:35:19.048096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.048107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 00:30:56.686 [2024-12-05 13:35:19.048417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.048428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 
00:30:56.686 [2024-12-05 13:35:19.048735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.048746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 00:30:56.686 [2024-12-05 13:35:19.049048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.049060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 00:30:56.686 [2024-12-05 13:35:19.049372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.049383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 00:30:56.686 [2024-12-05 13:35:19.049691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.049701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 00:30:56.686 [2024-12-05 13:35:19.049880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.049891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 00:30:56.686 [2024-12-05 13:35:19.050113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.050124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 00:30:56.686 [2024-12-05 13:35:19.050445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.050456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 00:30:56.686 [2024-12-05 13:35:19.050760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.050770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 00:30:56.686 [2024-12-05 13:35:19.050982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.050993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 00:30:56.686 [2024-12-05 13:35:19.051310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.051321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 
00:30:56.686 [2024-12-05 13:35:19.051374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.051383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 00:30:56.686 [2024-12-05 13:35:19.051649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.051660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 00:30:56.686 [2024-12-05 13:35:19.051970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.051981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 00:30:56.686 [2024-12-05 13:35:19.052329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.052340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 00:30:56.686 [2024-12-05 13:35:19.052668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.052678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 00:30:56.686 [2024-12-05 13:35:19.052986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.052998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 00:30:56.686 [2024-12-05 13:35:19.053174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.686 [2024-12-05 13:35:19.053184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.686 qpair failed and we were unable to recover it. 
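On the failure mode: errno 111 on Linux is ECONNREFUSED, i.e. the target at 10.0.0.2:4420 (the conventional NVMe/TCP port) is actively rejecting TCP connections because nothing is listening there at this point in the test, so every connect() attempt from SPDK's posix socket layer fails immediately and the driver keeps retrying. A minimal standalone sketch that reproduces the same errno against a closed port (loopback is used here so the refusal is immediate; the port mirrors the log, and the program is illustrative only, not SPDK code):

    /* Sketch: connect() to a port with no listener on a reachable host
     * fails with ECONNREFUSED (errno 111 on Linux), the same error
     * posix_sock_create reports above. Illustrative only. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With no listener bound to the port, this prints errno 111. */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }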
00:30:56.686 Read completed with error (sct=0, sc=8)
00:30:56.686 starting I/O failed
[... the same two-line pattern repeats for the remaining completions in this burst: 32 in total, 22 reads and 10 writes, every one failing with (sct=0, sc=8) followed by "starting I/O failed"; repeats elided ...]
00:30:56.686 [2024-12-05 13:35:19.053939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:56.686 [2024-12-05 13:35:19.054406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.686 [2024-12-05 13:35:19.054470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd860000b90 with addr=10.0.0.2, port=4420
00:30:56.686 qpair failed and we were unable to recover it.
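The (sct=0, sc=8) pairs in the burst above are NVMe completion status fields: status code type 0 is the Generic Command Status set, and status code 0x08 in that set is, per the NVMe base specification, "Command Aborted due to SQ Deletion", consistent with outstanding reads and writes being aborted as the qpair is torn down; the CQ transport error -6 that follows corresponds to -ENXIO ("No such device or address"). A small illustrative decoder for the generic status codes seen here (the helper is hypothetical, not an SPDK API):

    /* Hypothetical decoder for the (sct, sc) pairs in this log.
     * The SCT/SC meanings are from the NVMe base spec; the function
     * itself is illustrative and not part of SPDK. */
    #include <stdio.h>

    static const char *nvme_generic_status_str(int sct, int sc)
    {
        if (sct != 0) {
            return "non-generic status code type";
        }
        switch (sc) {
        case 0x00: return "Successful Completion";
        case 0x04: return "Data Transfer Error";
        case 0x07: return "Command Abort Requested";
        case 0x08: return "Command Aborted due to SQ Deletion";
        default:   return "other generic status";
        }
    }

    int main(void)
    {
        /* Every failed read/write above reports sct=0, sc=8. */
        printf("(sct=0, sc=8) -> %s\n", nvme_generic_status_str(0, 8));
        return 0;
    }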
00:30:56.686 [2024-12-05 13:35:19.054877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.686 [2024-12-05 13:35:19.054911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd860000b90 with addr=10.0.0.2, port=4420
00:30:56.686 qpair failed and we were unable to recover it.
[... the connect()/qpair-connect error triplet then resumes for tqpair=0x1d3a490 (addr=10.0.0.2, port=4420, errno = 111) and repeats ~88 more times between 13:35:19.055 and 13:35:19.079; repeats elided ...]
00:30:56.689 [2024-12-05 13:35:19.079591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.689 [2024-12-05 13:35:19.079601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:56.689 qpair failed and we were unable to recover it.
00:30:56.689 [2024-12-05 13:35:19.079910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.079922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.080254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.080265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.080576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.080587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.080933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.080945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.081261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.081272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.081588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.081599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.081790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.081802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.082109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.082121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.082449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.082460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.082641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.082652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 
00:30:56.689 [2024-12-05 13:35:19.082984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.082996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.083177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.083188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.083520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.083530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.083846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.083857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.084033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.084043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.084365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.084376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.084685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.084696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.084877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.084888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.085189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.085200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.085480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.085491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 
00:30:56.689 [2024-12-05 13:35:19.085795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.085806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.086205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.086217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.086523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.086535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.086711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.086722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.086906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.086917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.087117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.087128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.087390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.087401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.087692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.689 [2024-12-05 13:35:19.087702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.689 qpair failed and we were unable to recover it. 00:30:56.689 [2024-12-05 13:35:19.087984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.087995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.088296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.088307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 
00:30:56.690 [2024-12-05 13:35:19.088499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.088509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.088804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.088815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.089138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.089150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.089465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.089475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.089814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.089826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.090045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.090056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.090242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.090253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.090546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.090557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.090865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.090877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.091032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.091043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 
00:30:56.690 [2024-12-05 13:35:19.091228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.091238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.091576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.091586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.091899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.091910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.092323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.092333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.092643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.092653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.092829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.092840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.093017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.093029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.093321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.093331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.093545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.093556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.093735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.093746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 
00:30:56.690 [2024-12-05 13:35:19.094043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.094055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.094359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.094370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.094677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.094688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.095006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.095017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.095326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.095337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.095556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.095567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.095859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.095874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.096089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.096100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.096412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.096422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.096757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.096769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 
00:30:56.690 [2024-12-05 13:35:19.097079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.097090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.097389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.097401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.097752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.097763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.097812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.097821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.097870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.097881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.098197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.098208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.690 [2024-12-05 13:35:19.098514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.690 [2024-12-05 13:35:19.098525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.690 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.098709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.098721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.098890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.098901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.099182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.099193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 
00:30:56.691 [2024-12-05 13:35:19.099514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.099525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.099717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.099728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.100053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.100064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.100374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.100384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.100695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.100706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.101014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.101026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.101350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.101360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.101661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.101672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.101976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.101988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.102323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.102335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 
00:30:56.691 [2024-12-05 13:35:19.102675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.102685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.103023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.103034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.103239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.103250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.103534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.103544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.103893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.103903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.104299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.104310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.104602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.104612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.104927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.104938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.105111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.105121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.105453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.105464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 
00:30:56.691 [2024-12-05 13:35:19.105745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.105756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.105973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.105985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.106279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.106289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.106623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.106634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.106974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.106985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.107306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.107316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.107652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.107662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.107993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.108004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.108188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.108199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.108408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.108419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 
00:30:56.691 [2024-12-05 13:35:19.108760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.691 [2024-12-05 13:35:19.108770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.691 qpair failed and we were unable to recover it. 00:30:56.691 [2024-12-05 13:35:19.109026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.109038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.109347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.109358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.109670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.109680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.109988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.109999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.110345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.110356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.110550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.110561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.110749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.110759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.110935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.110946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.111156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.111167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 
00:30:56.692 [2024-12-05 13:35:19.111455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.111466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.111781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.111793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.112113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.112124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.112455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.112466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.112746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.112756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.113084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.113095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.113433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.113444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.113664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.113674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.114000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.114012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.114322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.114333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 
00:30:56.692 [2024-12-05 13:35:19.114518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.114529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.114868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.114878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.115208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.115218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.115403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.115414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.115576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.115586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.115671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.115680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.116001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.116012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.116197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.116208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.116510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.116521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.116706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.116721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 
00:30:56.692 [2024-12-05 13:35:19.116891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.116903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.117236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.117247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.117426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.117436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.117754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.117764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.117955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.117967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.118298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.118308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.118629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.118640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.118824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.118835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.119028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.119039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.692 qpair failed and we were unable to recover it. 00:30:56.692 [2024-12-05 13:35:19.119371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.692 [2024-12-05 13:35:19.119382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.693 qpair failed and we were unable to recover it. 
00:30:56.698 [2024-12-05 13:35:19.175314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.175324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.175640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.175651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.176010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.176022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.176240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.176251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.176559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.176570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.176828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.176838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.177150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.177162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.177497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.177509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.177824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.177836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.178149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.178163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 
00:30:56.698 [2024-12-05 13:35:19.178490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.178503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.178809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.178821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.179130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.179142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.179462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.179474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.179656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.179668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.179850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.179865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.180184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.180197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.180516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.180527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.180818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.180828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.181136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.181148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 
00:30:56.698 [2024-12-05 13:35:19.181463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.181474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.181780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.181790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.181965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.181976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.182162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.182172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.182504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.182515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.182832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.182843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.183171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.183183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.183476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.183486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.183801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.183815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.184154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.184166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 
00:30:56.698 [2024-12-05 13:35:19.184342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.184353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.184667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.184679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.184890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.184902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.185229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.185240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.185426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.185437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.698 qpair failed and we were unable to recover it. 00:30:56.698 [2024-12-05 13:35:19.185747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.698 [2024-12-05 13:35:19.185758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.185933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.185945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.186285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.186295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.186471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.186483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.186667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.186679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 
00:30:56.699 [2024-12-05 13:35:19.187015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.187026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.187326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.187337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.187666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.187677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.187974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.187988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.188301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.188312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.188522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.188533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.188844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.188856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.189193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.189204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.189509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.189521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.189875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.189887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 
00:30:56.699 [2024-12-05 13:35:19.190222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.190232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.190572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.190583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.190908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.190919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.191237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.191248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.191611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.191623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.191933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.191944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.192261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.192273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.192615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.192626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.192937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.192949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.193136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.193146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 
00:30:56.699 [2024-12-05 13:35:19.193443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.193455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.193621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.193631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.193783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.193793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.194111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.194122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.194312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.194322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.194644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.194655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.194944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.194955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.195254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.195266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.195602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.195613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.195916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.195928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 
00:30:56.699 [2024-12-05 13:35:19.196264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.196278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.196612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.196623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.196929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.196941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.699 [2024-12-05 13:35:19.197234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.699 [2024-12-05 13:35:19.197244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.699 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.197551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.197562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.197752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.197763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.197977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.197989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.198283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.198294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.198456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.198468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.198660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.198671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 
00:30:56.700 [2024-12-05 13:35:19.198995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.199006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.199051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.199060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.199369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.199380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.199743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.199754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.199989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.200000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.200318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.200329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.200508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.200520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.200857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.200873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.201188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.201198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.201561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.201572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 
00:30:56.700 [2024-12-05 13:35:19.201752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.201765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.202063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.202075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.202395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.202406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.202800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.202811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.203125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.203137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.203427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.203439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.203623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.203634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.203690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.203702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.203866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.203878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.204171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.204181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 
00:30:56.700 [2024-12-05 13:35:19.204365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.204377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.204671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.204682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.204860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.204875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.205162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.205173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.205450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.205461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.205795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.205805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.206154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.206165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.206491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.700 [2024-12-05 13:35:19.206502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.700 qpair failed and we were unable to recover it. 00:30:56.700 [2024-12-05 13:35:19.206708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.206720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.206917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.206930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 
00:30:56.701 [2024-12-05 13:35:19.207254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.207265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.207575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.207587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.207776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.207789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.208103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.208114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.208306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.208319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.208637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.208648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.208961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.208972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.209334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.209345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.209664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.209675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.209915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.209927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 
00:30:56.701 [2024-12-05 13:35:19.210309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.210320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.210624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.210636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.210933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.210944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.211270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.211282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.211575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.211586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.211925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.211937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.212308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.212321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.212632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.212643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.212953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.212964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.213251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.213262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 
00:30:56.701 [2024-12-05 13:35:19.213454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.213466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.213756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.213768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.214113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.214125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.214412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.214423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.214777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.214789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.214975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.214987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.215318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.215330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.215669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.215681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.215996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.216009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 00:30:56.701 [2024-12-05 13:35:19.216330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.701 [2024-12-05 13:35:19.216341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.701 qpair failed and we were unable to recover it. 
00:30:56.701 [2024-12-05 13:35:19.216531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.701 [2024-12-05 13:35:19.216543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:56.701 qpair failed and we were unable to recover it.
00:30:56.701 [... the same connect()/qpair-failure triplet repeats verbatim for every retry from 13:35:19.216531 through 13:35:19.275568 (about 200 consecutive attempts against tqpair=0x1d3a490, addr=10.0.0.2, port=4420); only the timestamps advance ...]
00:30:56.988 [2024-12-05 13:35:19.275556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.988 [2024-12-05 13:35:19.275568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:56.988 qpair failed and we were unable to recover it.
00:30:56.988 [2024-12-05 13:35:19.275643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.275654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.275855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.275870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.276045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.276057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.276347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.276358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.276666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.276677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.276876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.276891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.276942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.276952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.277206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.277218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.277556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.277568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.277843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.277853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 
00:30:56.988 [2024-12-05 13:35:19.278183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.278194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.278500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.278512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.278562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.278573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.278855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.278870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.279167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.279177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.279493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.279505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.279734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.279745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.280085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.280097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.988 [2024-12-05 13:35:19.280258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.988 [2024-12-05 13:35:19.280270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.988 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.280598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.280608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 
00:30:56.989 [2024-12-05 13:35:19.280919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.280931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.281241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.281252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.281467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.281478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.281784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.281795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.281973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.281985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.282283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.282294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.282622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.282633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.283042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.283053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.283403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.283413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.283590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.283601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 
00:30:56.989 [2024-12-05 13:35:19.283935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.283946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.284327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.284338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.284532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.284545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.284733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.284745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.284917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.284928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.285290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.285301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.285628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.285639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.285822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.285833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.286110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.286122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.286532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.286543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 
00:30:56.989 [2024-12-05 13:35:19.286723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.286734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.287055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.287066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.287380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.287390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.287662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.287673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.287998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.288010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.288334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.288345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.288655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.288666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.288984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.288995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.289328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.289340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.289525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.289537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 
00:30:56.989 [2024-12-05 13:35:19.289838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.289850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.290032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.290044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.290386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.290398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.290700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.290711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.291029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.291041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.291231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.291244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.291427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.291438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.291766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.291777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.292060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.292071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.292386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.292397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 
00:30:56.989 [2024-12-05 13:35:19.292778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.292791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.293120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.293132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.293441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.293453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.293766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.293778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.294110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.294121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.294430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.294441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.294745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.294758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.294948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.294960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.295266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.295277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.295470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.295482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 
00:30:56.989 [2024-12-05 13:35:19.295811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.295822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.296159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.296171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.296510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.296521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.296857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.296872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.297105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.297117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.297300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.297311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.297612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.297624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.297799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.297810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.298003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.298015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.298352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.298363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 
00:30:56.989 [2024-12-05 13:35:19.298647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.298658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.298978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.298990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.299194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.299206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.299520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.299531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.299878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.299890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.299941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.299951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.989 qpair failed and we were unable to recover it. 00:30:56.989 [2024-12-05 13:35:19.300245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.989 [2024-12-05 13:35:19.300256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.300565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.300576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.300881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.300893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.301194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.301206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 
00:30:56.990 [2024-12-05 13:35:19.301512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.301523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.301814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.301826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.302143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.302154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.302496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.302508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.302808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.302819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.303092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.303104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.303293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.303305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.303593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.303604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.303914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.303925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.304113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.304124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 
00:30:56.990 [2024-12-05 13:35:19.304403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.304415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.304729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.304740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.305059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.305070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.305371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.305382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.305700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.305711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.306056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.306068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.306430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.306441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.306751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.306762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.306967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.306977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.307145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.307156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 
00:30:56.990 [2024-12-05 13:35:19.307505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.307516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.307830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.307842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.308185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.308196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.308383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.308395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.308716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.308728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.309037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.309049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.309178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.309189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.309314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.309394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd860000b90 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.309756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.309794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd860000b90 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.310112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.310147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd860000b90 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 
00:30:56.990 [2024-12-05 13:35:19.310395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.310411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.310648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.310659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.310727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.310738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.310999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.311010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.311350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.311360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.311691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.311702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.311866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.311878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.312149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.312163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.312472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.312482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 00:30:56.990 [2024-12-05 13:35:19.312781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.990 [2024-12-05 13:35:19.312791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.990 qpair failed and we were unable to recover it. 
00:30:56.990 [2024-12-05 13:35:19.312979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.990 [2024-12-05 13:35:19.312990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:56.990 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for every reconnect attempt between 13:35:19.313382 and 13:35:19.372375: posix_sock_create's connect() fails with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420, and the qpair cannot be recovered ...]
00:30:56.993 [2024-12-05 13:35:19.372691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.993 [2024-12-05 13:35:19.372702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:56.993 qpair failed and we were unable to recover it.
00:30:56.993 [2024-12-05 13:35:19.372988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.373000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.373312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.373323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.373556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.373567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.373882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.373894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.374209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.374222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.374527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.374537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.374834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.374844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.375036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.375048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.375381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.375392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.375704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.375715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 
00:30:56.993 [2024-12-05 13:35:19.375908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.375920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.376243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.376254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.376552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.376563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.376728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.376739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.377078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.377089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.377398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.377409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.377582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.377593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.377878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.377890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.378214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.378226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.378416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.378427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 
00:30:56.993 [2024-12-05 13:35:19.378619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.378630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.378966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.378977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.379253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.379264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.379575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.379586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.379880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.379891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.380232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.380243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.380511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.993 [2024-12-05 13:35:19.380523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.993 qpair failed and we were unable to recover it. 00:30:56.993 [2024-12-05 13:35:19.380835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.380846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.381193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.381205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.381350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.381360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 
00:30:56.994 [2024-12-05 13:35:19.381642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.381654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.381963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.381977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.382301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.382312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.382593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.382605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.382905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.382916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.383246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.383258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.383571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.383581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.383741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.383752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.383941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.383952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.384147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.384158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 
00:30:56.994 [2024-12-05 13:35:19.384465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.384476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.384803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.384815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.385189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.385201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.385389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.385400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.385732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.385743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.386084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.386096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.386274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.386286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.386474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.386486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.386671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.386683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.386975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.386987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 
00:30:56.994 [2024-12-05 13:35:19.387318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.387329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.387495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.387507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.387734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.387746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.387927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.387939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.388211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.388222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.388533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.388544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.388857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.388875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.389200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.389211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.389390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.389401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.389689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.389700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 
00:30:56.994 [2024-12-05 13:35:19.390042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.390053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.390240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.390251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.390301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.390310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.390623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.390634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.390795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.390806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.391081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.391093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.391488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.391499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.391808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.391820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.392004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.392016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.392194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.392204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 
00:30:56.994 [2024-12-05 13:35:19.392524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.392535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.392869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.392880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.393207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.393218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.393384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.393396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.393730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.393740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.393901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.393912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.393998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.394009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.394331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.394341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.394518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.394529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.394817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.394827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 
00:30:56.994 [2024-12-05 13:35:19.395009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.395021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.395323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.395334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.395680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.395691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.396009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.396020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.396340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.396351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.396685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.396695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.397031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.397042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.397356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.397366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.397679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.397691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.398006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.398019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 
00:30:56.994 [2024-12-05 13:35:19.398356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.398368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.398526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.398538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.398736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.398747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.399099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.399110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.399296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.399307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.399595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.399606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.399794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.399804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.400048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.400061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.400387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.400398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 00:30:56.994 [2024-12-05 13:35:19.400671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.994 [2024-12-05 13:35:19.400684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.994 qpair failed and we were unable to recover it. 
00:30:56.994 [2024-12-05 13:35:19.400733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.400742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.401042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.401053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.401355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.401367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.401678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.401689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.401885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.401897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.402250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.402261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.402642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.402653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.402787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.402797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.403112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.403123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.403408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.403418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 
00:30:56.995 [2024-12-05 13:35:19.403605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.403616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.403932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.403944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.404254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.404265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.404608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.404620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.404916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.404927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.405104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.405115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.405411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.405423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.405757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.405768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.405932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.405943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.406132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.406143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 
00:30:56.995 [2024-12-05 13:35:19.406281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.406292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.406531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.406544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.406881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.406894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.407110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.407121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.407441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.407452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.407784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.407795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.408122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.408136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.408314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.408326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.408630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.408640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.409006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.409018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 
00:30:56.995 [2024-12-05 13:35:19.409321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.409332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.409636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.409647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.409829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.409840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.410025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.410037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.410208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.410220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.410439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.410451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.410773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.410784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.410980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.410991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.411301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.411312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 00:30:56.995 [2024-12-05 13:35:19.411682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.995 [2024-12-05 13:35:19.411694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.995 qpair failed and we were unable to recover it. 
00:30:56.998 [2024-12-05 13:35:19.464925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.464936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.465118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.465131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.465425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.465436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.465772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.465782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.466090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.466100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.466418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.466429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.466648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.466658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.466986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.466997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.467314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.467323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.467497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.467507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 
00:30:56.998 [2024-12-05 13:35:19.467735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.467744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.467911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.467922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.468137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.468146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.468314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.468325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.468598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.468608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.468979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.468990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.469039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.469049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.469328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.469339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.469610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.469619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.469790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.469799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 
00:30:56.998 [2024-12-05 13:35:19.469853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.469875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.470242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.470252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.470302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.470311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.470589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.470599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.470926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.470937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.471111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.471120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.471381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.471391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.471567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.471577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.471770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.471780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.472115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.472125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 
00:30:56.998 [2024-12-05 13:35:19.472474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.472484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.472805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.472814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.473028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.473039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.473214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.473224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.473602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.473612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.473770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.473780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.474099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.474109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.474411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.474421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.474705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.474715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.475030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.475040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 
00:30:56.998 [2024-12-05 13:35:19.475361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.475371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.475657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.475668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.475847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.475857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.476260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.476271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.476559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.476570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.476878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.476889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.477127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.477136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.477320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.477330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.477620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.477630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 00:30:56.998 [2024-12-05 13:35:19.477995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.998 [2024-12-05 13:35:19.478005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.998 qpair failed and we were unable to recover it. 
00:30:56.999 [2024-12-05 13:35:19.478315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.478324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.478634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.478644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.478825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.478835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.479059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.479071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.479370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.479381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.479697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.479708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.480013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.480024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.480324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.480335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.480617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.480627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.480786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.480796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 
00:30:56.999 [2024-12-05 13:35:19.481127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.481137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.481447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.481457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.481656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.481668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.482008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.482019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.482209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.482226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.482573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.482583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.482875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.482886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.483053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.483062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.483422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.483431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.483563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.483573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 
00:30:56.999 [2024-12-05 13:35:19.483902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.483912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.484078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.484090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.484385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.484395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.484706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.484716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.485034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.485045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.485264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.485274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.485642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.485651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.485948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.485958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.486283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.486294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.486489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.486500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 
00:30:56.999 [2024-12-05 13:35:19.486850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.486865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.487052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.487065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.487374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.487384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.487685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.487695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.487871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.487882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.488223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.488233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.488424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.488441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.488622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.488632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.488795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.488805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.488975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.488986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 
00:30:56.999 [2024-12-05 13:35:19.489199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.489210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.489560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.489570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.489860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.489874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.490184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.490194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.490408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.490419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.490719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.490730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.491005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.491016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.491180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.491190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.491473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.491483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.491791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.491801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 
00:30:56.999 [2024-12-05 13:35:19.491977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.491988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.492225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.492235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.492412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.492429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.492730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.492740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.493034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.493044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.493377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.493387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.493700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.493711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.494018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.494030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.494397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.494410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.494721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.494731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 
00:30:56.999 [2024-12-05 13:35:19.495132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.495143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.495456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.495466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.495759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.495769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.496092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.496102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.496498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.496508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.496692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.496702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.496916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.496926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.497306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.497316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.497511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.497523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 00:30:56.999 [2024-12-05 13:35:19.497694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.999 [2024-12-05 13:35:19.497704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:56.999 qpair failed and we were unable to recover it. 
00:30:56.999 [2024-12-05 13:35:19.498032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.999 [2024-12-05 13:35:19.498043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:56.999 qpair failed and we were unable to recover it.
00:30:56.999 [2024-12-05 13:35:19.498370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.999 [2024-12-05 13:35:19.498381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:56.999 qpair failed and we were unable to recover it.
00:30:56.999 [2024-12-05 13:35:19.498738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.999 [2024-12-05 13:35:19.498748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:56.999 qpair failed and we were unable to recover it.
00:30:56.999 [2024-12-05 13:35:19.498937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:56.999 [2024-12-05 13:35:19.498948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.000 qpair failed and we were unable to recover it.
00:30:57.000 [2024-12-05 13:35:19.499134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.000 [2024-12-05 13:35:19.499144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.000 qpair failed and we were unable to recover it.
00:30:57.000 [2024-12-05 13:35:19.499655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.000 [2024-12-05 13:35:19.499758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd854000b90 with addr=10.0.0.2, port=4420
00:30:57.000 qpair failed and we were unable to recover it.
00:30:57.000 [2024-12-05 13:35:19.500199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.000 [2024-12-05 13:35:19.500290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd854000b90 with addr=10.0.0.2, port=4420
00:30:57.000 qpair failed and we were unable to recover it.
00:30:57.000 [2024-12-05 13:35:19.500724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.000 [2024-12-05 13:35:19.500760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd854000b90 with addr=10.0.0.2, port=4420
00:30:57.000 qpair failed and we were unable to recover it.
00:30:57.000 [2024-12-05 13:35:19.501075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.000 [2024-12-05 13:35:19.501088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.000 qpair failed and we were unable to recover it.
00:30:57.000 [2024-12-05 13:35:19.501422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.000 [2024-12-05 13:35:19.501432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.000 qpair failed and we were unable to recover it.
00:30:57.000 [2024-12-05 13:35:19.501729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.000 [2024-12-05 13:35:19.501739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.000 qpair failed and we were unable to recover it.
00:30:57.000 [2024-12-05 13:35:19.515433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.000 [2024-12-05 13:35:19.515444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.000 qpair failed and we were unable to recover it.
00:30:57.000 [2024-12-05 13:35:19.515772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.000 [2024-12-05 13:35:19.515782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.000 qpair failed and we were unable to recover it. 00:30:57.000 [2024-12-05 13:35:19.516101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.000 [2024-12-05 13:35:19.516112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.000 qpair failed and we were unable to recover it. 00:30:57.000 [2024-12-05 13:35:19.516282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.000 [2024-12-05 13:35:19.516291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.000 qpair failed and we were unable to recover it. 00:30:57.000 [2024-12-05 13:35:19.516583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.000 [2024-12-05 13:35:19.516594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.000 qpair failed and we were unable to recover it. 00:30:57.000 [2024-12-05 13:35:19.516910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.000 [2024-12-05 13:35:19.516922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.000 qpair failed and we were unable to recover it. 00:30:57.000 [2024-12-05 13:35:19.517327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.000 [2024-12-05 13:35:19.517337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.000 qpair failed and we were unable to recover it. 00:30:57.000 [2024-12-05 13:35:19.517551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.000 [2024-12-05 13:35:19.517562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.000 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.517885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.517897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.517944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.517955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.518145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.518155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 
00:30:57.001 [2024-12-05 13:35:19.518361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.518372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.518694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.518705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.518890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.518901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.519236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.519247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.519408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.519418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.519726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.519736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.520020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.520031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.520243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.520253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.520605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.520616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.520912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.520923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 
00:30:57.001 [2024-12-05 13:35:19.521265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.521276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.521565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.521576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.521962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.521973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.522217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.522229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.522561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.522572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.522909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.522921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.523078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.523089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.523393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.523403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.523734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.523745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.524136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.524147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 
00:30:57.001 [2024-12-05 13:35:19.524454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.524464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.524757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.524767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.525060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.525071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.525447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.525457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.525645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.525657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.525891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.525901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.526266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.526276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.526591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.526601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.526927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.526938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.527237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.527247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 
00:30:57.001 [2024-12-05 13:35:19.527435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.527453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.527687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.527699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.528023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.528034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.528351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.528362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.001 [2024-12-05 13:35:19.528587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.001 [2024-12-05 13:35:19.528598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.001 qpair failed and we were unable to recover it. 00:30:57.275 [2024-12-05 13:35:19.528776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.275 [2024-12-05 13:35:19.528787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.275 qpair failed and we were unable to recover it. 00:30:57.275 [2024-12-05 13:35:19.528963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.275 [2024-12-05 13:35:19.528980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.275 qpair failed and we were unable to recover it. 00:30:57.275 [2024-12-05 13:35:19.529187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.275 [2024-12-05 13:35:19.529197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.275 qpair failed and we were unable to recover it. 00:30:57.275 [2024-12-05 13:35:19.529500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.275 [2024-12-05 13:35:19.529510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.275 qpair failed and we were unable to recover it. 00:30:57.275 [2024-12-05 13:35:19.529665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.275 [2024-12-05 13:35:19.529675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.275 qpair failed and we were unable to recover it. 
00:30:57.275 [2024-12-05 13:35:19.529909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.275 [2024-12-05 13:35:19.529922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.275 qpair failed and we were unable to recover it. 00:30:57.275 [2024-12-05 13:35:19.530088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.275 [2024-12-05 13:35:19.530100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.275 qpair failed and we were unable to recover it. 00:30:57.275 [2024-12-05 13:35:19.530272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.275 [2024-12-05 13:35:19.530282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.275 qpair failed and we were unable to recover it. 00:30:57.275 [2024-12-05 13:35:19.530598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.275 [2024-12-05 13:35:19.530608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.275 qpair failed and we were unable to recover it. 00:30:57.275 [2024-12-05 13:35:19.530927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.275 [2024-12-05 13:35:19.530938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.275 qpair failed and we were unable to recover it. 00:30:57.275 [2024-12-05 13:35:19.531105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.275 [2024-12-05 13:35:19.531115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.275 qpair failed and we were unable to recover it. 00:30:57.275 [2024-12-05 13:35:19.531481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.275 [2024-12-05 13:35:19.531491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.275 qpair failed and we were unable to recover it. 00:30:57.275 [2024-12-05 13:35:19.531804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.275 [2024-12-05 13:35:19.531815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.275 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.532137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.532148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.532443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.532454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 
00:30:57.276 [2024-12-05 13:35:19.532583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.532593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.532883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.532894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.533193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.533203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.533512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.533522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.533737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.533748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.534039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.534050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.534341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.534351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.534661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.534671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.534838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.534847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.535217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.535227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 
00:30:57.276 [2024-12-05 13:35:19.535524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.535534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.535877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.535888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.536220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.536230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.536398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.536409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.536735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.536745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.537040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.537050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.537424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.537434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.537619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.537630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.537819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.537829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.538132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.538142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 
00:30:57.276 [2024-12-05 13:35:19.538488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.538498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.538809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.538819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.539215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.539226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.539570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.539580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.539878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.539889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.540225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.540234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.540417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.276 [2024-12-05 13:35:19.540426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.276 qpair failed and we were unable to recover it. 00:30:57.276 [2024-12-05 13:35:19.540660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.540670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.540983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.540993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.541321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.541332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 
00:30:57.277 [2024-12-05 13:35:19.541742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.541751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.542032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.542042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.542371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.542381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.542655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.542664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.542962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.542973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.543272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.543281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.543586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.543597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.543910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.543920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.544120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.544129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.544419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.544429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 
00:30:57.277 [2024-12-05 13:35:19.544657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.544666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.544970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.544981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.545279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.545289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.545585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.545595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.545889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.545900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.546220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.546229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.546648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.546658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.546977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.546988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.547160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.547170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.547341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.547351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 
00:30:57.277 [2024-12-05 13:35:19.547733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.547743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.548048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.548059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.548253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.548262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.548315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.548325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.548487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.548497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.548813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.277 [2024-12-05 13:35:19.548824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.277 qpair failed and we were unable to recover it. 00:30:57.277 [2024-12-05 13:35:19.549018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.278 [2024-12-05 13:35:19.549028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.278 qpair failed and we were unable to recover it. 00:30:57.278 [2024-12-05 13:35:19.549379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.278 [2024-12-05 13:35:19.549389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.278 qpair failed and we were unable to recover it. 00:30:57.278 [2024-12-05 13:35:19.549623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.278 [2024-12-05 13:35:19.549632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.278 qpair failed and we were unable to recover it. 00:30:57.278 [2024-12-05 13:35:19.549916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.278 [2024-12-05 13:35:19.549927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.278 qpair failed and we were unable to recover it. 
00:30:57.278 [2024-12-05 13:35:19.550292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.278 [2024-12-05 13:35:19.550303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.278 qpair failed and we were unable to recover it. 00:30:57.278 [2024-12-05 13:35:19.550600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.278 [2024-12-05 13:35:19.550610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.278 qpair failed and we were unable to recover it. 00:30:57.278 [2024-12-05 13:35:19.550821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.278 [2024-12-05 13:35:19.550831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.278 qpair failed and we were unable to recover it. 00:30:57.278 [2024-12-05 13:35:19.550881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.278 [2024-12-05 13:35:19.550891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.278 qpair failed and we were unable to recover it. 00:30:57.278 [2024-12-05 13:35:19.551095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.278 [2024-12-05 13:35:19.551105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.278 qpair failed and we were unable to recover it. 00:30:57.278 [2024-12-05 13:35:19.551415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.278 [2024-12-05 13:35:19.551425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.278 qpair failed and we were unable to recover it. 00:30:57.278 [2024-12-05 13:35:19.551750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.278 [2024-12-05 13:35:19.551759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.278 qpair failed and we were unable to recover it. 00:30:57.278 [2024-12-05 13:35:19.552083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.278 [2024-12-05 13:35:19.552093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.278 qpair failed and we were unable to recover it. 00:30:57.278 [2024-12-05 13:35:19.552417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.278 [2024-12-05 13:35:19.552428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.278 qpair failed and we were unable to recover it. 00:30:57.278 [2024-12-05 13:35:19.552766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.278 [2024-12-05 13:35:19.552777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.278 qpair failed and we were unable to recover it. 
00:30:57.278 [2024-12-05 13:35:19.553083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.278 [2024-12-05 13:35:19.553093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.278 qpair failed and we were unable to recover it.
00:30:57.278 [... the same three-message sequence -- posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats continuously from 13:35:19.553279 through 13:35:19.569242; the repetitions are elided here ...]
00:30:57.280 [... the connect()-failed / qpair-failed sequence continues from 13:35:19.569429 through 13:35:19.574138, interleaved with the shell trace below; only the trace lines are kept ...]
00:30:57.280 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:57.280 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:30:57.280 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:57.280 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:57.280 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
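The trace above is the tail of the test framework's wait loop: the (( i == 0 )) check at autotest_common.sh line 864 evaluates false (the retry budget was not exhausted), so the helper returns 0 and timing_exit start_nvmf_tgt records that the target started. A minimal sketch of a poll loop with that shape, in the spirit of the trace -- the probe command, the retry budget of 40, and the function name waitfortgt are assumptions for illustration, not the actual autotest_common.sh code:

    # Hypothetical wait-for-target helper (illustrative sketch only).
    waitfortgt() {
        local i
        for (( i = 40; i != 0; i-- )); do
            # Assumed probe: succeed once something listens on the NVMe/TCP port.
            nc -z 10.0.0.2 4420 2>/dev/null && break
            sleep 0.5
        done
        (( i == 0 )) && return 1   # budget exhausted: target never came up
        return 0                   # matches the "(( i == 0 )) ... return 0" trace above
    }

Note the order mirrors the log: (( i == 0 )) is evaluated and fails because the loop broke early, then the function returns 0.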
00:30:57.281 [2024-12-05 13:35:19.574186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.574197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 00:30:57.281 [2024-12-05 13:35:19.574465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.574475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 00:30:57.281 [2024-12-05 13:35:19.574785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.574796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 00:30:57.281 [2024-12-05 13:35:19.574977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.574988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 00:30:57.281 [2024-12-05 13:35:19.575280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.575290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 00:30:57.281 [2024-12-05 13:35:19.575458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.575469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 00:30:57.281 [2024-12-05 13:35:19.575655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.575666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 00:30:57.281 [2024-12-05 13:35:19.575828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.575840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 00:30:57.281 [2024-12-05 13:35:19.576200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.576211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 00:30:57.281 [2024-12-05 13:35:19.576373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.576383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 
00:30:57.281 [2024-12-05 13:35:19.576740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.576750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 00:30:57.281 [2024-12-05 13:35:19.577060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.577070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 00:30:57.281 [2024-12-05 13:35:19.577231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.577241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 00:30:57.281 [2024-12-05 13:35:19.577516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.577526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 00:30:57.281 [2024-12-05 13:35:19.577873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.577885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 00:30:57.281 [2024-12-05 13:35:19.578103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.578114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 00:30:57.281 [2024-12-05 13:35:19.578416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.578427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.281 qpair failed and we were unable to recover it. 00:30:57.281 [2024-12-05 13:35:19.578717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.281 [2024-12-05 13:35:19.578728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.578948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.578958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.579301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.579310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 
00:30:57.282 [2024-12-05 13:35:19.579643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.579654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.579984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.579995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.580201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.580211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.580362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.580373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.580547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.580556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.580729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.580739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.581035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.581045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.581243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.581253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.581540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.581550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.581735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.581747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 
00:30:57.282 [2024-12-05 13:35:19.581959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.581969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.582168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.582178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.582463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.582473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.582652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.582663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.582991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.583002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.583316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.583327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.583658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.583669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.583875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.583886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.584106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.584116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.584416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.584426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 
00:30:57.282 [2024-12-05 13:35:19.584740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.584751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.584957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.584968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.585414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.585425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.585704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.585715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.586068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.586079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.586299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.586308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.282 [2024-12-05 13:35:19.586589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.282 [2024-12-05 13:35:19.586599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.282 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.586922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.586933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.587296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.587307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.587607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.587618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 
00:30:57.283 [2024-12-05 13:35:19.587786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.587796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.588115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.588125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.588418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.588428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.588726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.588736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.589033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.589044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.589417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.589428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.589731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.589740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.589901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.589913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.590208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.590219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.590404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.590416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 
00:30:57.283 [2024-12-05 13:35:19.590489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.590500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.590702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.590712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.591017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.591027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.591364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.591374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.591584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.591594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.591940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.591952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.592303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.592313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.592470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.592480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.592878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.592889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.593180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.593190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 
00:30:57.283 [2024-12-05 13:35:19.593499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.593509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.593690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.593700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.594065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.594076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.594382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.594394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.594735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.594744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.595047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.595058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.283 [2024-12-05 13:35:19.595361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.283 [2024-12-05 13:35:19.595372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.283 qpair failed and we were unable to recover it. 00:30:57.284 [2024-12-05 13:35:19.595414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.284 [2024-12-05 13:35:19.595423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.284 qpair failed and we were unable to recover it. 00:30:57.284 [2024-12-05 13:35:19.595737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.284 [2024-12-05 13:35:19.595746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.284 qpair failed and we were unable to recover it. 00:30:57.284 [2024-12-05 13:35:19.595939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.284 [2024-12-05 13:35:19.595956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.284 qpair failed and we were unable to recover it. 
00:30:57.284 [2024-12-05 13:35:19.596220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.284 [2024-12-05 13:35:19.596229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.284 qpair failed and we were unable to recover it. 00:30:57.284 [2024-12-05 13:35:19.596411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.284 [2024-12-05 13:35:19.596421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.284 qpair failed and we were unable to recover it. 00:30:57.284 [2024-12-05 13:35:19.596718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.284 [2024-12-05 13:35:19.596728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.284 qpair failed and we were unable to recover it. 00:30:57.284 [2024-12-05 13:35:19.597032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.284 [2024-12-05 13:35:19.597043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.284 qpair failed and we were unable to recover it. 00:30:57.284 [2024-12-05 13:35:19.597367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.284 [2024-12-05 13:35:19.597378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.284 qpair failed and we were unable to recover it. 00:30:57.284 [2024-12-05 13:35:19.597702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.284 [2024-12-05 13:35:19.597712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.284 qpair failed and we were unable to recover it. 00:30:57.284 [2024-12-05 13:35:19.597883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.284 [2024-12-05 13:35:19.597894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.284 qpair failed and we were unable to recover it. 00:30:57.284 [2024-12-05 13:35:19.598303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.284 [2024-12-05 13:35:19.598313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.284 qpair failed and we were unable to recover it. 00:30:57.284 [2024-12-05 13:35:19.598491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.284 [2024-12-05 13:35:19.598501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.284 qpair failed and we were unable to recover it. 00:30:57.284 [2024-12-05 13:35:19.598916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:57.284 [2024-12-05 13:35:19.598928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420 00:30:57.284 qpair failed and we were unable to recover it. 
00:30:57.284 [2024-12-05 13:35:19.599110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.284 [2024-12-05 13:35:19.599121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.284 qpair failed and we were unable to recover it.
00:30:57.284 [... the same three-line connect()/qpair-failure record repeats for each retry, 13:35:19.599411 through 13:35:19.610705; duplicate entries elided ...]
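errno = 111 is ECONNREFUSED on Linux: the host's connect() to 10.0.0.2:4420 reaches the target machine, but nothing is listening on that port yet, so every attempt is refused and the initiator cannot bring the qpair up. The numeric-to-name mapping can be confirmed with a one-liner (a sketch, assuming python3 is available on the node):

    $ python3 -c 'import os; print(os.strerror(111))'
    Connection refused

The refusals are expected at this point: the target-side setup (malloc bdev, TCP transport) is still in progress further down this trace, so the host's reconnect loop keeps spinning until a listener appears.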
00:30:57.285 [... connect()/qpair-failure retries continue (13:35:19.611028 - 13:35:19.611205) ...]
00:30:57.285 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:57.285 [... retries continue (13:35:19.611428 - 13:35:19.611641) ...]
00:30:57.286 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:57.286 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:57.286 [... retries continue (13:35:19.611981 - 13:35:19.612174) ...]
00:30:57.286 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:57.286 [... retries continue (13:35:19.612510 - 13:35:19.612808) ...]
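Two target-side steps land in the middle of the retry burst above: the trap from nvmf/common.sh registers shared-memory collection and nvmftestfini as cleanup on SIGINT/SIGTERM/EXIT, and rpc_cmd creates the backing block device for the test. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py; a standalone equivalent of this step (a sketch, assuming the default RPC socket at /var/tmp/spdk.sock) would be:

    $ scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    Malloc0

That is a 64 MiB RAM-backed bdev with 512-byte blocks, named Malloc0; the RPC prints the new bdev's name, which is the bare "Malloc0" echoed later in this log.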
00:30:57.286 [... the connect()/qpair-failure record repeats continuously, 13:35:19.613129 through 13:35:19.640403; duplicate entries elided ...]
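The elided retries above are spaced a few hundred microseconds apart, i.e. a tight reconnect loop rather than timeout-driven backoff. When reproducing a run like this by hand, the refusals can be avoided by waiting for the target port to open before connecting (a sketch, assuming a bash with /dev/tcp support and coreutils timeout):

    $ until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do sleep 0.1; done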
00:30:57.289 [... retries continue (13:35:19.640724 - 13:35:19.641734) ...]
00:30:57.289 Malloc0
00:30:57.289 [... retries continue (13:35:19.642066 - 13:35:19.642548) ...]
00:30:57.290 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:57.290 [... retries continue (13:35:19.642908 - 13:35:19.643082) ...]
00:30:57.290 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:57.290 [... retries continue (13:35:19.643292) ...]
00:30:57.290 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:57.290 [... retries continue (13:35:19.643505) ...]
00:30:57.290 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:57.290 [... retries continue (13:35:19.643886 - 13:35:19.645358) ...]
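The traced rpc_cmd nvmf_create_transport -t tcp -o initializes the NVMe-oF TCP transport inside the target application. A minimal standalone equivalent (a sketch; the trailing -o is an extra option flag passed through by the harness and is dropped here rather than guessed at) would be:

    $ scripts/rpc.py nvmf_create_transport -t TCP

Creating the transport sets up the transport-level resources for TCP connections; it does not by itself open a listening socket, which is why the connect() refusals keep going after this step.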
00:30:57.290 [... retries continue (13:35:19.645712 - 13:35:19.648492) ...]
00:30:57.290 [... retries continue (13:35:19.648837 - 13:35:19.649150) ...]
00:30:57.290 [2024-12-05 13:35:19.649404] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:57.290 [... retries continue (13:35:19.649458 - 13:35:19.651486) ...]
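The *** TCP Transport Init *** notice from tcp.c:756 confirms the transport is now up on the target; the refusals that follow are consistent with no listener having been added on 10.0.0.2:4420 yet at this point in the trace. For reference, once a subsystem and listener exist, a manual host-side connect with nvme-cli would look like this (a sketch; the NQN below is a placeholder, not taken from this log):

    $ nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1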
00:30:57.291 [2024-12-05 13:35:19.651715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.291 [2024-12-05 13:35:19.651725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.291 qpair failed and we were unable to recover it.
00:30:57.291 [... the same sequence repeats 19 more times, 13:35:19.651887 through 13:35:19.657326 ...]
00:30:57.291 [2024-12-05 13:35:19.657663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.291 [2024-12-05 13:35:19.657672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.291 qpair failed and we were unable to recover it.
00:30:57.291 [... 3 more identical attempts, 13:35:19.657850 through 13:35:19.658390 ...]
00:30:57.291 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:57.291 [... one more identical attempt at 13:35:19.658752 ...]
00:30:57.292 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:57.292 [... one more identical attempt at 13:35:19.659078 ...]
00:30:57.292 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:57.292 [... one more identical attempt at 13:35:19.659267 ...]
00:30:57.292 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:57.292 [... one more identical attempt at 13:35:19.659497 ...]
00:30:57.292 [2024-12-05 13:35:19.659719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.292 [2024-12-05 13:35:19.659729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.292 qpair failed and we were unable to recover it.
00:30:57.292 [... the same sequence repeats 39 more times, 13:35:19.660035 through 13:35:19.670048 ...]
00:30:57.293 [2024-12-05 13:35:19.670361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.293 [2024-12-05 13:35:19.670371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.293 qpair failed and we were unable to recover it.
00:30:57.293 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:57.293 [... one more identical attempt at 13:35:19.670688 ...]
00:30:57.293 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:57.293 [... one more identical attempt at 13:35:19.671010 ...]
00:30:57.293 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:57.293 [... one more identical attempt at 13:35:19.671340 ...]
00:30:57.293 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:57.293 [... 4 more identical attempts, 13:35:19.671678 through 13:35:19.672701 ...]
00:30:57.293 [2024-12-05 13:35:19.673019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.293 [2024-12-05 13:35:19.673030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.294 qpair failed and we were unable to recover it.
00:30:57.294 [... the same sequence repeats 29 more times, 13:35:19.673347 through 13:35:19.681206 ...]
00:30:57.295 [2024-12-05 13:35:19.681257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d37030 (9): Bad file descriptor
00:30:57.295 [2024-12-05 13:35:19.681619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.295 [2024-12-05 13:35:19.681699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd854000b90 with addr=10.0.0.2, port=4420
00:30:57.295 qpair failed and we were unable to recover it.
00:30:57.295 [2024-12-05 13:35:19.682113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.295 [2024-12-05 13:35:19.682204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd854000b90 with addr=10.0.0.2, port=4420
00:30:57.295 qpair failed and we were unable to recover it.
00:30:57.295 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:57.295 [2024-12-05 13:35:19.682674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.295 [2024-12-05 13:35:19.682711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd854000b90 with addr=10.0.0.2, port=4420
00:30:57.295 qpair failed and we were unable to recover it.
00:30:57.295 [2024-12-05 13:35:19.682818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.295 [2024-12-05 13:35:19.682829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.295 qpair failed and we were unable to recover it.
00:30:57.295 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:57.295 [2024-12-05 13:35:19.683143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.295 [2024-12-05 13:35:19.683155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.295 qpair failed and we were unable to recover it.
00:30:57.295 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:57.295 [2024-12-05 13:35:19.683291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.295 [2024-12-05 13:35:19.683302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.295 qpair failed and we were unable to recover it.
00:30:57.295 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:57.295 [2024-12-05 13:35:19.683627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.295 [2024-12-05 13:35:19.683638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.295 qpair failed and we were unable to recover it.
00:30:57.295 [2024-12-05 13:35:19.683972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.295 [2024-12-05 13:35:19.683982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.295 qpair failed and we were unable to recover it.
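The xtrace lines interleaved with the retries show target_disconnect.sh rebuilding the target configuration step by step: nvmf_create_subsystem, then nvmf_subsystem_add_ns, then nvmf_subsystem_add_listener. A standalone sketch of that traced sequence (the rpc.py path is an assumption; the test's rpc_cmd helper issues the same RPCs against the target's RPC socket):

  # Hypothetical standalone replay of the traced rpc_cmd calls; script path assumed.
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420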
00:30:57.295 [2024-12-05 13:35:19.684292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.295 [2024-12-05 13:35:19.684302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.295 qpair failed and we were unable to recover it.
00:30:57.295 [... the same sequence repeats 19 more times, 13:35:19.684516 through 13:35:19.689177 ...]
00:30:57.296 [2024-12-05 13:35:19.689490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.296 [2024-12-05 13:35:19.689501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d3a490 with addr=10.0.0.2, port=4420
00:30:57.296 qpair failed and we were unable to recover it.
00:30:57.296 [2024-12-05 13:35:19.689638] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:57.296 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:57.296 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:57.296 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:57.296 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:57.296 [2024-12-05 13:35:19.700259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:57.296 [2024-12-05 13:35:19.700333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:57.296 [2024-12-05 13:35:19.700352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:57.296 [2024-12-05 13:35:19.700361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:57.296 [2024-12-05 13:35:19.700367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:57.296 [2024-12-05 13:35:19.700387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:57.296 qpair failed and we were unable to recover it.
00:30:57.296 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:57.296 13:35:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1135229
00:30:57.296 [2024-12-05 13:35:19.710265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:57.296 [2024-12-05 13:35:19.710328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:57.296 [2024-12-05 13:35:19.710345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:57.296 [2024-12-05 13:35:19.710352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:57.296 [2024-12-05 13:35:19.710359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:57.296 [2024-12-05 13:35:19.710375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:57.296 qpair failed and we were unable to recover it.
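Once the listener is back, the failure mode changes: the target rejects the I/O-queue CONNECT with Unknown controller ID 0x1 and the host reports sct 1, sc 130. Reading that status (from memory of the NVMe-oF spec, so worth verifying): sct 1 is the command-specific status type, and sc 130 is 0x82, the Fabrics CONNECT invalid-parameters code, consistent with the host presenting a controller ID that no longer exists after the target bounce. The hex conversion itself:

  # sc 130 rendered in hex; printf here is plain POSIX shell.
  printf 'sc %d = 0x%02x\n' 130 130
  # Expected output: sc 130 = 0x82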
00:30:57.296 [2024-12-05 13:35:19.720322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:57.296 [2024-12-05 13:35:19.720398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:57.296 [2024-12-05 13:35:19.720413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:57.296 [2024-12-05 13:35:19.720420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:57.296 [2024-12-05 13:35:19.720426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:57.296 [2024-12-05 13:35:19.720440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:57.296 qpair failed and we were unable to recover it.
00:30:57.296 [... the same seven-line block (Unknown controller ID 0x1 / Connect command failed, rc -5 / sct 1, sc 130 / Failed to poll NVMe-oF Fabric CONNECT command / Failed to connect tqpair=0x1d3a490 / CQ transport error -6 on qpair id 3 / qpair failed and we were unable to recover it.) repeats 11 more times at roughly 10 ms intervals, 13:35:19.730167 through 13:35:19.830376 ...]
00:30:57.558 [2024-12-05 13:35:19.840487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.558 [2024-12-05 13:35:19.840574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.558 [2024-12-05 13:35:19.840588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.558 [2024-12-05 13:35:19.840597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.558 [2024-12-05 13:35:19.840603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.558 [2024-12-05 13:35:19.840616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.558 qpair failed and we were unable to recover it. 00:30:57.558 [2024-12-05 13:35:19.850517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.558 [2024-12-05 13:35:19.850612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.558 [2024-12-05 13:35:19.850626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.558 [2024-12-05 13:35:19.850634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.558 [2024-12-05 13:35:19.850640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.558 [2024-12-05 13:35:19.850654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.558 qpair failed and we were unable to recover it. 00:30:57.558 [2024-12-05 13:35:19.860516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.558 [2024-12-05 13:35:19.860608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.558 [2024-12-05 13:35:19.860622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.558 [2024-12-05 13:35:19.860630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.558 [2024-12-05 13:35:19.860636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.558 [2024-12-05 13:35:19.860650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.558 qpair failed and we were unable to recover it. 
00:30:57.558 [2024-12-05 13:35:19.870573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.558 [2024-12-05 13:35:19.870637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.558 [2024-12-05 13:35:19.870663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.558 [2024-12-05 13:35:19.870672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.558 [2024-12-05 13:35:19.870680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.558 [2024-12-05 13:35:19.870701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.558 qpair failed and we were unable to recover it. 00:30:57.558 [2024-12-05 13:35:19.880574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.558 [2024-12-05 13:35:19.880672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.558 [2024-12-05 13:35:19.880698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.558 [2024-12-05 13:35:19.880708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.558 [2024-12-05 13:35:19.880716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.558 [2024-12-05 13:35:19.880736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.558 qpair failed and we were unable to recover it. 00:30:57.558 [2024-12-05 13:35:19.890680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.558 [2024-12-05 13:35:19.890764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.558 [2024-12-05 13:35:19.890779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.558 [2024-12-05 13:35:19.890787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.558 [2024-12-05 13:35:19.890794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.558 [2024-12-05 13:35:19.890809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.558 qpair failed and we were unable to recover it. 
00:30:57.558 [2024-12-05 13:35:19.900673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.558 [2024-12-05 13:35:19.900736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.558 [2024-12-05 13:35:19.900751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.559 [2024-12-05 13:35:19.900759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.559 [2024-12-05 13:35:19.900766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.559 [2024-12-05 13:35:19.900785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.559 qpair failed and we were unable to recover it. 00:30:57.559 [2024-12-05 13:35:19.910675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.559 [2024-12-05 13:35:19.910749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.559 [2024-12-05 13:35:19.910769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.559 [2024-12-05 13:35:19.910776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.559 [2024-12-05 13:35:19.910784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.559 [2024-12-05 13:35:19.910798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.559 qpair failed and we were unable to recover it. 00:30:57.559 [2024-12-05 13:35:19.920919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.559 [2024-12-05 13:35:19.920986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.559 [2024-12-05 13:35:19.921000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.559 [2024-12-05 13:35:19.921008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.559 [2024-12-05 13:35:19.921015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.559 [2024-12-05 13:35:19.921029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.559 qpair failed and we were unable to recover it. 
00:30:57.559 [2024-12-05 13:35:19.930783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.559 [2024-12-05 13:35:19.930841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.559 [2024-12-05 13:35:19.930855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.559 [2024-12-05 13:35:19.930866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.559 [2024-12-05 13:35:19.930873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.559 [2024-12-05 13:35:19.930887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.559 qpair failed and we were unable to recover it. 00:30:57.559 [2024-12-05 13:35:19.940821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.559 [2024-12-05 13:35:19.940877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.559 [2024-12-05 13:35:19.940892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.559 [2024-12-05 13:35:19.940899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.559 [2024-12-05 13:35:19.940906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.559 [2024-12-05 13:35:19.940920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.559 qpair failed and we were unable to recover it. 00:30:57.559 [2024-12-05 13:35:19.950708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.559 [2024-12-05 13:35:19.950771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.559 [2024-12-05 13:35:19.950785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.559 [2024-12-05 13:35:19.950792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.559 [2024-12-05 13:35:19.950803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.559 [2024-12-05 13:35:19.950817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.559 qpair failed and we were unable to recover it. 
00:30:57.559 [2024-12-05 13:35:19.960818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.559 [2024-12-05 13:35:19.960919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.559 [2024-12-05 13:35:19.960934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.559 [2024-12-05 13:35:19.960942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.559 [2024-12-05 13:35:19.960948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.559 [2024-12-05 13:35:19.960962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.559 qpair failed and we were unable to recover it. 00:30:57.559 [2024-12-05 13:35:19.970840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.559 [2024-12-05 13:35:19.970904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.559 [2024-12-05 13:35:19.970918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.559 [2024-12-05 13:35:19.970925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.559 [2024-12-05 13:35:19.970932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.559 [2024-12-05 13:35:19.970946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.559 qpair failed and we were unable to recover it. 00:30:57.559 [2024-12-05 13:35:19.980753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.559 [2024-12-05 13:35:19.980812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.559 [2024-12-05 13:35:19.980826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.559 [2024-12-05 13:35:19.980834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.559 [2024-12-05 13:35:19.980840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.559 [2024-12-05 13:35:19.980853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.559 qpair failed and we were unable to recover it. 
00:30:57.559 [2024-12-05 13:35:19.990891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.559 [2024-12-05 13:35:19.990947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.559 [2024-12-05 13:35:19.990961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.559 [2024-12-05 13:35:19.990968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.559 [2024-12-05 13:35:19.990974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.559 [2024-12-05 13:35:19.990988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.559 qpair failed and we were unable to recover it. 00:30:57.559 [2024-12-05 13:35:20.000916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.559 [2024-12-05 13:35:20.000998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.559 [2024-12-05 13:35:20.001012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.559 [2024-12-05 13:35:20.001020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.559 [2024-12-05 13:35:20.001027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.559 [2024-12-05 13:35:20.001041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.559 qpair failed and we were unable to recover it. 00:30:57.559 [2024-12-05 13:35:20.010968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.560 [2024-12-05 13:35:20.011029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.560 [2024-12-05 13:35:20.011044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.560 [2024-12-05 13:35:20.011051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.560 [2024-12-05 13:35:20.011058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.560 [2024-12-05 13:35:20.011072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.560 qpair failed and we were unable to recover it. 
00:30:57.560 [2024-12-05 13:35:20.021010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.560 [2024-12-05 13:35:20.021073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.560 [2024-12-05 13:35:20.021087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.560 [2024-12-05 13:35:20.021095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.560 [2024-12-05 13:35:20.021101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.560 [2024-12-05 13:35:20.021115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.560 qpair failed and we were unable to recover it. 00:30:57.560 [2024-12-05 13:35:20.030912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.560 [2024-12-05 13:35:20.030969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.560 [2024-12-05 13:35:20.030982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.560 [2024-12-05 13:35:20.030990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.560 [2024-12-05 13:35:20.030996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.560 [2024-12-05 13:35:20.031010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.560 qpair failed and we were unable to recover it. 00:30:57.560 [2024-12-05 13:35:20.041036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.560 [2024-12-05 13:35:20.041095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.560 [2024-12-05 13:35:20.041112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.560 [2024-12-05 13:35:20.041119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.560 [2024-12-05 13:35:20.041126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.560 [2024-12-05 13:35:20.041140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.560 qpair failed and we were unable to recover it. 
00:30:57.560 [2024-12-05 13:35:20.051111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.560 [2024-12-05 13:35:20.051178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.560 [2024-12-05 13:35:20.051192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.560 [2024-12-05 13:35:20.051199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.560 [2024-12-05 13:35:20.051206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.560 [2024-12-05 13:35:20.051219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.560 qpair failed and we were unable to recover it. 00:30:57.560 [2024-12-05 13:35:20.061116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.560 [2024-12-05 13:35:20.061174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.560 [2024-12-05 13:35:20.061188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.560 [2024-12-05 13:35:20.061195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.560 [2024-12-05 13:35:20.061203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.560 [2024-12-05 13:35:20.061216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.560 qpair failed and we were unable to recover it. 00:30:57.560 [2024-12-05 13:35:20.071099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.560 [2024-12-05 13:35:20.071167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.560 [2024-12-05 13:35:20.071181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.560 [2024-12-05 13:35:20.071188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.560 [2024-12-05 13:35:20.071195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.560 [2024-12-05 13:35:20.071209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.560 qpair failed and we were unable to recover it. 
00:30:57.560 [2024-12-05 13:35:20.081153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.560 [2024-12-05 13:35:20.081206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.560 [2024-12-05 13:35:20.081220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.560 [2024-12-05 13:35:20.081227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.560 [2024-12-05 13:35:20.081237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.560 [2024-12-05 13:35:20.081252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.560 qpair failed and we were unable to recover it. 00:30:57.560 [2024-12-05 13:35:20.091198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.560 [2024-12-05 13:35:20.091277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.560 [2024-12-05 13:35:20.091290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.560 [2024-12-05 13:35:20.091297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.560 [2024-12-05 13:35:20.091304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.560 [2024-12-05 13:35:20.091319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.560 qpair failed and we were unable to recover it. 00:30:57.560 [2024-12-05 13:35:20.101252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.560 [2024-12-05 13:35:20.101306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.560 [2024-12-05 13:35:20.101319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.560 [2024-12-05 13:35:20.101326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.560 [2024-12-05 13:35:20.101332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.560 [2024-12-05 13:35:20.101347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.560 qpair failed and we were unable to recover it. 
00:30:57.560 [2024-12-05 13:35:20.111225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.560 [2024-12-05 13:35:20.111285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.560 [2024-12-05 13:35:20.111299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.560 [2024-12-05 13:35:20.111306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.561 [2024-12-05 13:35:20.111313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.561 [2024-12-05 13:35:20.111327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.561 qpair failed and we were unable to recover it. 00:30:57.561 [2024-12-05 13:35:20.121276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.561 [2024-12-05 13:35:20.121335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.561 [2024-12-05 13:35:20.121350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.561 [2024-12-05 13:35:20.121357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.561 [2024-12-05 13:35:20.121364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.561 [2024-12-05 13:35:20.121377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.561 qpair failed and we were unable to recover it. 00:30:57.823 [2024-12-05 13:35:20.131293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.823 [2024-12-05 13:35:20.131366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.823 [2024-12-05 13:35:20.131381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.823 [2024-12-05 13:35:20.131388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.823 [2024-12-05 13:35:20.131395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.823 [2024-12-05 13:35:20.131410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.823 qpair failed and we were unable to recover it. 
00:30:57.823 [2024-12-05 13:35:20.141215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.823 [2024-12-05 13:35:20.141270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.823 [2024-12-05 13:35:20.141284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.823 [2024-12-05 13:35:20.141292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.823 [2024-12-05 13:35:20.141299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.823 [2024-12-05 13:35:20.141313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.823 qpair failed and we were unable to recover it. 00:30:57.823 [2024-12-05 13:35:20.151236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.823 [2024-12-05 13:35:20.151294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.824 [2024-12-05 13:35:20.151308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.824 [2024-12-05 13:35:20.151315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.824 [2024-12-05 13:35:20.151322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.824 [2024-12-05 13:35:20.151335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.824 qpair failed and we were unable to recover it. 00:30:57.824 [2024-12-05 13:35:20.161373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.824 [2024-12-05 13:35:20.161431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.824 [2024-12-05 13:35:20.161444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.824 [2024-12-05 13:35:20.161453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.824 [2024-12-05 13:35:20.161460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.824 [2024-12-05 13:35:20.161474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.824 qpair failed and we were unable to recover it. 
00:30:57.824 [2024-12-05 13:35:20.171427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.824 [2024-12-05 13:35:20.171486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.824 [2024-12-05 13:35:20.171502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.824 [2024-12-05 13:35:20.171510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.824 [2024-12-05 13:35:20.171516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.824 [2024-12-05 13:35:20.171530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.824 qpair failed and we were unable to recover it. 00:30:57.824 [2024-12-05 13:35:20.181312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.824 [2024-12-05 13:35:20.181368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.824 [2024-12-05 13:35:20.181381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.824 [2024-12-05 13:35:20.181389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.824 [2024-12-05 13:35:20.181395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.824 [2024-12-05 13:35:20.181409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.824 qpair failed and we were unable to recover it. 00:30:57.824 [2024-12-05 13:35:20.191332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.824 [2024-12-05 13:35:20.191386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.824 [2024-12-05 13:35:20.191400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.824 [2024-12-05 13:35:20.191407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.824 [2024-12-05 13:35:20.191413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.824 [2024-12-05 13:35:20.191427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.824 qpair failed and we were unable to recover it. 
00:30:57.824 [2024-12-05 13:35:20.201496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.824 [2024-12-05 13:35:20.201554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.824 [2024-12-05 13:35:20.201568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.824 [2024-12-05 13:35:20.201576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.824 [2024-12-05 13:35:20.201582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.824 [2024-12-05 13:35:20.201596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.824 qpair failed and we were unable to recover it. 00:30:57.824 [2024-12-05 13:35:20.211561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.824 [2024-12-05 13:35:20.211625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.824 [2024-12-05 13:35:20.211639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.824 [2024-12-05 13:35:20.211646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.824 [2024-12-05 13:35:20.211657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.824 [2024-12-05 13:35:20.211671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.824 qpair failed and we were unable to recover it. 00:30:57.824 [2024-12-05 13:35:20.221555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.824 [2024-12-05 13:35:20.221657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.824 [2024-12-05 13:35:20.221671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.824 [2024-12-05 13:35:20.221679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.824 [2024-12-05 13:35:20.221686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.824 [2024-12-05 13:35:20.221700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.824 qpair failed and we were unable to recover it. 
00:30:57.824 [2024-12-05 13:35:20.231546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.824 [2024-12-05 13:35:20.231600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.824 [2024-12-05 13:35:20.231613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.824 [2024-12-05 13:35:20.231621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.824 [2024-12-05 13:35:20.231627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.824 [2024-12-05 13:35:20.231641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.824 qpair failed and we were unable to recover it. 00:30:57.824 [2024-12-05 13:35:20.241577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.824 [2024-12-05 13:35:20.241670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.824 [2024-12-05 13:35:20.241684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.824 [2024-12-05 13:35:20.241691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.824 [2024-12-05 13:35:20.241698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.824 [2024-12-05 13:35:20.241712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.824 qpair failed and we were unable to recover it. 00:30:57.824 [2024-12-05 13:35:20.251624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.824 [2024-12-05 13:35:20.251688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.824 [2024-12-05 13:35:20.251701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.824 [2024-12-05 13:35:20.251708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.824 [2024-12-05 13:35:20.251715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.824 [2024-12-05 13:35:20.251729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.824 qpair failed and we were unable to recover it. 
00:30:57.825 [2024-12-05 13:35:20.261660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.825 [2024-12-05 13:35:20.261722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.825 [2024-12-05 13:35:20.261736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.825 [2024-12-05 13:35:20.261743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.825 [2024-12-05 13:35:20.261750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.825 [2024-12-05 13:35:20.261764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.825 qpair failed and we were unable to recover it. 00:30:57.825 [2024-12-05 13:35:20.271669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.825 [2024-12-05 13:35:20.271743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.825 [2024-12-05 13:35:20.271757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.825 [2024-12-05 13:35:20.271764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.825 [2024-12-05 13:35:20.271771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.825 [2024-12-05 13:35:20.271786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.825 qpair failed and we were unable to recover it. 00:30:57.825 [2024-12-05 13:35:20.281706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.825 [2024-12-05 13:35:20.281761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.825 [2024-12-05 13:35:20.281775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.825 [2024-12-05 13:35:20.281782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.825 [2024-12-05 13:35:20.281789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.825 [2024-12-05 13:35:20.281802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.825 qpair failed and we were unable to recover it. 
00:30:57.825 [2024-12-05 13:35:20.291607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.825 [2024-12-05 13:35:20.291665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.825 [2024-12-05 13:35:20.291679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.825 [2024-12-05 13:35:20.291686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.825 [2024-12-05 13:35:20.291693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.825 [2024-12-05 13:35:20.291706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.825 qpair failed and we were unable to recover it. 00:30:57.825 [2024-12-05 13:35:20.301667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.825 [2024-12-05 13:35:20.301729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.825 [2024-12-05 13:35:20.301746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.825 [2024-12-05 13:35:20.301753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.825 [2024-12-05 13:35:20.301760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.825 [2024-12-05 13:35:20.301773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.825 qpair failed and we were unable to recover it. 00:30:57.825 [2024-12-05 13:35:20.311788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.825 [2024-12-05 13:35:20.311837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.825 [2024-12-05 13:35:20.311851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.825 [2024-12-05 13:35:20.311858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.825 [2024-12-05 13:35:20.311870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.825 [2024-12-05 13:35:20.311884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.825 qpair failed and we were unable to recover it. 
00:30:57.825 [2024-12-05 13:35:20.321803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.825 [2024-12-05 13:35:20.321857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.825 [2024-12-05 13:35:20.321876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.825 [2024-12-05 13:35:20.321883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.825 [2024-12-05 13:35:20.321890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.825 [2024-12-05 13:35:20.321904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.825 qpair failed and we were unable to recover it. 00:30:57.825 [2024-12-05 13:35:20.331718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.825 [2024-12-05 13:35:20.331772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.825 [2024-12-05 13:35:20.331785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.825 [2024-12-05 13:35:20.331793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.825 [2024-12-05 13:35:20.331799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.825 [2024-12-05 13:35:20.331813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.825 qpair failed and we were unable to recover it. 00:30:57.825 [2024-12-05 13:35:20.341898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.825 [2024-12-05 13:35:20.341955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.825 [2024-12-05 13:35:20.341969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.825 [2024-12-05 13:35:20.341976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.825 [2024-12-05 13:35:20.341986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.825 [2024-12-05 13:35:20.342001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.825 qpair failed and we were unable to recover it. 
00:30:57.825 [2024-12-05 13:35:20.351885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.825 [2024-12-05 13:35:20.351934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.825 [2024-12-05 13:35:20.351947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.825 [2024-12-05 13:35:20.351955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.825 [2024-12-05 13:35:20.351961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.825 [2024-12-05 13:35:20.351975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.825 qpair failed and we were unable to recover it. 00:30:57.825 [2024-12-05 13:35:20.361913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.825 [2024-12-05 13:35:20.361974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.825 [2024-12-05 13:35:20.361987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.826 [2024-12-05 13:35:20.361995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.826 [2024-12-05 13:35:20.362001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.826 [2024-12-05 13:35:20.362015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.826 qpair failed and we were unable to recover it. 00:30:57.826 [2024-12-05 13:35:20.371932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.826 [2024-12-05 13:35:20.371994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.826 [2024-12-05 13:35:20.372008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.826 [2024-12-05 13:35:20.372015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.826 [2024-12-05 13:35:20.372021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.826 [2024-12-05 13:35:20.372035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.826 qpair failed and we were unable to recover it. 
00:30:57.826 [2024-12-05 13:35:20.381882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:57.826 [2024-12-05 13:35:20.381982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:57.826 [2024-12-05 13:35:20.381995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:57.826 [2024-12-05 13:35:20.382003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:57.826 [2024-12-05 13:35:20.382010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:57.826 [2024-12-05 13:35:20.382024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:57.826 qpair failed and we were unable to recover it. 00:30:58.088 [2024-12-05 13:35:20.391990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.088 [2024-12-05 13:35:20.392043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.088 [2024-12-05 13:35:20.392057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.088 [2024-12-05 13:35:20.392064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.088 [2024-12-05 13:35:20.392071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.088 [2024-12-05 13:35:20.392085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.088 qpair failed and we were unable to recover it. 00:30:58.088 [2024-12-05 13:35:20.402034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.088 [2024-12-05 13:35:20.402136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.088 [2024-12-05 13:35:20.402150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.088 [2024-12-05 13:35:20.402157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.088 [2024-12-05 13:35:20.402164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.088 [2024-12-05 13:35:20.402178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.088 qpair failed and we were unable to recover it. 
00:30:58.088 [2024-12-05 13:35:20.412038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.088 [2024-12-05 13:35:20.412134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.088 [2024-12-05 13:35:20.412148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.088 [2024-12-05 13:35:20.412157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.088 [2024-12-05 13:35:20.412164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.088 [2024-12-05 13:35:20.412179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.088 qpair failed and we were unable to recover it. 00:30:58.088 [2024-12-05 13:35:20.422101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.088 [2024-12-05 13:35:20.422160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.088 [2024-12-05 13:35:20.422173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.088 [2024-12-05 13:35:20.422181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.088 [2024-12-05 13:35:20.422187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.088 [2024-12-05 13:35:20.422201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.088 qpair failed and we were unable to recover it. 00:30:58.088 [2024-12-05 13:35:20.432030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.088 [2024-12-05 13:35:20.432096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.088 [2024-12-05 13:35:20.432115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.088 [2024-12-05 13:35:20.432122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.088 [2024-12-05 13:35:20.432129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.088 [2024-12-05 13:35:20.432143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.088 qpair failed and we were unable to recover it. 
00:30:58.088 [2024-12-05 13:35:20.442156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.088 [2024-12-05 13:35:20.442211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.088 [2024-12-05 13:35:20.442224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.088 [2024-12-05 13:35:20.442232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.088 [2024-12-05 13:35:20.442239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.088 [2024-12-05 13:35:20.442252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.088 qpair failed and we were unable to recover it. 00:30:58.088 [2024-12-05 13:35:20.452188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.088 [2024-12-05 13:35:20.452247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.088 [2024-12-05 13:35:20.452260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.088 [2024-12-05 13:35:20.452267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.088 [2024-12-05 13:35:20.452274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.088 [2024-12-05 13:35:20.452288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.088 qpair failed and we were unable to recover it. 00:30:58.088 [2024-12-05 13:35:20.462182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.088 [2024-12-05 13:35:20.462284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.088 [2024-12-05 13:35:20.462298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.088 [2024-12-05 13:35:20.462305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.089 [2024-12-05 13:35:20.462313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.089 [2024-12-05 13:35:20.462327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.089 qpair failed and we were unable to recover it. 
00:30:58.089 [2024-12-05 13:35:20.472209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.089 [2024-12-05 13:35:20.472300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.089 [2024-12-05 13:35:20.472314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.089 [2024-12-05 13:35:20.472321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.089 [2024-12-05 13:35:20.472332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.089 [2024-12-05 13:35:20.472346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.089 qpair failed and we were unable to recover it. 00:30:58.089 [2024-12-05 13:35:20.482120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.089 [2024-12-05 13:35:20.482178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.089 [2024-12-05 13:35:20.482192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.089 [2024-12-05 13:35:20.482199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.089 [2024-12-05 13:35:20.482206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.089 [2024-12-05 13:35:20.482220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.089 qpair failed and we were unable to recover it. 00:30:58.089 [2024-12-05 13:35:20.492309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.089 [2024-12-05 13:35:20.492376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.089 [2024-12-05 13:35:20.492390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.089 [2024-12-05 13:35:20.492397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.089 [2024-12-05 13:35:20.492403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.089 [2024-12-05 13:35:20.492417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.089 qpair failed and we were unable to recover it. 
00:30:58.089 [2024-12-05 13:35:20.502325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.089 [2024-12-05 13:35:20.502382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.089 [2024-12-05 13:35:20.502395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.089 [2024-12-05 13:35:20.502402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.089 [2024-12-05 13:35:20.502409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.089 [2024-12-05 13:35:20.502423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.089 qpair failed and we were unable to recover it. 00:30:58.089 [2024-12-05 13:35:20.512357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.089 [2024-12-05 13:35:20.512416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.089 [2024-12-05 13:35:20.512429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.089 [2024-12-05 13:35:20.512437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.089 [2024-12-05 13:35:20.512443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.089 [2024-12-05 13:35:20.512457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.089 qpair failed and we were unable to recover it. 00:30:58.089 [2024-12-05 13:35:20.522379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.089 [2024-12-05 13:35:20.522430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.089 [2024-12-05 13:35:20.522443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.089 [2024-12-05 13:35:20.522451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.089 [2024-12-05 13:35:20.522457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.089 [2024-12-05 13:35:20.522470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.089 qpair failed and we were unable to recover it. 
00:30:58.089 [2024-12-05 13:35:20.532430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.089 [2024-12-05 13:35:20.532485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.089 [2024-12-05 13:35:20.532499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.089 [2024-12-05 13:35:20.532506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.089 [2024-12-05 13:35:20.532512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.089 [2024-12-05 13:35:20.532526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.089 qpair failed and we were unable to recover it. 00:30:58.089 [2024-12-05 13:35:20.542310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.089 [2024-12-05 13:35:20.542364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.089 [2024-12-05 13:35:20.542377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.089 [2024-12-05 13:35:20.542385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.089 [2024-12-05 13:35:20.542391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.089 [2024-12-05 13:35:20.542405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.089 qpair failed and we were unable to recover it. 00:30:58.089 [2024-12-05 13:35:20.552452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.089 [2024-12-05 13:35:20.552501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.089 [2024-12-05 13:35:20.552514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.089 [2024-12-05 13:35:20.552521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.089 [2024-12-05 13:35:20.552527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.089 [2024-12-05 13:35:20.552541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.089 qpair failed and we were unable to recover it. 
00:30:58.089 [2024-12-05 13:35:20.562469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.089 [2024-12-05 13:35:20.562565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.089 [2024-12-05 13:35:20.562581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.089 [2024-12-05 13:35:20.562589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.089 [2024-12-05 13:35:20.562595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.089 [2024-12-05 13:35:20.562609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.089 qpair failed and we were unable to recover it. 00:30:58.089 [2024-12-05 13:35:20.572507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.089 [2024-12-05 13:35:20.572612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.089 [2024-12-05 13:35:20.572625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.089 [2024-12-05 13:35:20.572633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.089 [2024-12-05 13:35:20.572639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.090 [2024-12-05 13:35:20.572653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.090 qpair failed and we were unable to recover it. 00:30:58.090 [2024-12-05 13:35:20.582518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.090 [2024-12-05 13:35:20.582586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.090 [2024-12-05 13:35:20.582600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.090 [2024-12-05 13:35:20.582607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.090 [2024-12-05 13:35:20.582614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.090 [2024-12-05 13:35:20.582627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.090 qpair failed and we were unable to recover it. 
00:30:58.090 [2024-12-05 13:35:20.592544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.090 [2024-12-05 13:35:20.592601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.090 [2024-12-05 13:35:20.592614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.090 [2024-12-05 13:35:20.592621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.090 [2024-12-05 13:35:20.592628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.090 [2024-12-05 13:35:20.592641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.090 qpair failed and we were unable to recover it. 00:30:58.090 [2024-12-05 13:35:20.602629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.090 [2024-12-05 13:35:20.602696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.090 [2024-12-05 13:35:20.602709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.090 [2024-12-05 13:35:20.602716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.090 [2024-12-05 13:35:20.602726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.090 [2024-12-05 13:35:20.602740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.090 qpair failed and we were unable to recover it. 00:30:58.090 [2024-12-05 13:35:20.612628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.090 [2024-12-05 13:35:20.612687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.090 [2024-12-05 13:35:20.612713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.090 [2024-12-05 13:35:20.612722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.090 [2024-12-05 13:35:20.612729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.090 [2024-12-05 13:35:20.612749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.090 qpair failed and we were unable to recover it. 
00:30:58.090 [2024-12-05 13:35:20.622633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.090 [2024-12-05 13:35:20.622702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.090 [2024-12-05 13:35:20.622728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.090 [2024-12-05 13:35:20.622737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.090 [2024-12-05 13:35:20.622744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.090 [2024-12-05 13:35:20.622763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.090 qpair failed and we were unable to recover it. 00:30:58.090 [2024-12-05 13:35:20.632661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.090 [2024-12-05 13:35:20.632718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.090 [2024-12-05 13:35:20.632734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.090 [2024-12-05 13:35:20.632741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.090 [2024-12-05 13:35:20.632748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.090 [2024-12-05 13:35:20.632763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.090 qpair failed and we were unable to recover it. 00:30:58.090 [2024-12-05 13:35:20.642683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.090 [2024-12-05 13:35:20.642745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.090 [2024-12-05 13:35:20.642760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.090 [2024-12-05 13:35:20.642767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.090 [2024-12-05 13:35:20.642774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.090 [2024-12-05 13:35:20.642788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.090 qpair failed and we were unable to recover it. 
00:30:58.090 [2024-12-05 13:35:20.652710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.354 [2024-12-05 13:35:20.652771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.354 [2024-12-05 13:35:20.652787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.354 [2024-12-05 13:35:20.652795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.354 [2024-12-05 13:35:20.652803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.354 [2024-12-05 13:35:20.652816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.354 qpair failed and we were unable to recover it. 00:30:58.354 [2024-12-05 13:35:20.662740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.354 [2024-12-05 13:35:20.662797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.354 [2024-12-05 13:35:20.662813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.354 [2024-12-05 13:35:20.662821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.354 [2024-12-05 13:35:20.662828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.354 [2024-12-05 13:35:20.662844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.354 qpair failed and we were unable to recover it. 00:30:58.354 [2024-12-05 13:35:20.672767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.354 [2024-12-05 13:35:20.672854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.354 [2024-12-05 13:35:20.672872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.354 [2024-12-05 13:35:20.672880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.354 [2024-12-05 13:35:20.672886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.354 [2024-12-05 13:35:20.672901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.354 qpair failed and we were unable to recover it. 
00:30:58.354 [2024-12-05 13:35:20.682682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.354 [2024-12-05 13:35:20.682749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.354 [2024-12-05 13:35:20.682763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.354 [2024-12-05 13:35:20.682771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.354 [2024-12-05 13:35:20.682777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.354 [2024-12-05 13:35:20.682791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.354 qpair failed and we were unable to recover it. 00:30:58.354 [2024-12-05 13:35:20.692774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.354 [2024-12-05 13:35:20.692873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.354 [2024-12-05 13:35:20.692890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.354 [2024-12-05 13:35:20.692898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.354 [2024-12-05 13:35:20.692904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.354 [2024-12-05 13:35:20.692919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.354 qpair failed and we were unable to recover it. 00:30:58.354 [2024-12-05 13:35:20.702883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.354 [2024-12-05 13:35:20.702941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.354 [2024-12-05 13:35:20.702955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.354 [2024-12-05 13:35:20.702962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.354 [2024-12-05 13:35:20.702969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.354 [2024-12-05 13:35:20.702982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.354 qpair failed and we were unable to recover it. 
00:30:58.354 [2024-12-05 13:35:20.712890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.354 [2024-12-05 13:35:20.712944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.354 [2024-12-05 13:35:20.712957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.354 [2024-12-05 13:35:20.712965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.354 [2024-12-05 13:35:20.712971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.354 [2024-12-05 13:35:20.712986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.354 qpair failed and we were unable to recover it. 00:30:58.354 [2024-12-05 13:35:20.722967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.354 [2024-12-05 13:35:20.723032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.354 [2024-12-05 13:35:20.723046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.354 [2024-12-05 13:35:20.723054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.354 [2024-12-05 13:35:20.723060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.354 [2024-12-05 13:35:20.723075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.354 qpair failed and we were unable to recover it. 00:30:58.354 [2024-12-05 13:35:20.732850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.354 [2024-12-05 13:35:20.732910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.354 [2024-12-05 13:35:20.732924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.354 [2024-12-05 13:35:20.732932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.354 [2024-12-05 13:35:20.732943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.354 [2024-12-05 13:35:20.732957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.354 qpair failed and we were unable to recover it. 
00:30:58.354 [2024-12-05 13:35:20.742956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.354 [2024-12-05 13:35:20.743007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.354 [2024-12-05 13:35:20.743021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.354 [2024-12-05 13:35:20.743028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.354 [2024-12-05 13:35:20.743035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.354 [2024-12-05 13:35:20.743049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.354 qpair failed and we were unable to recover it. 00:30:58.354 [2024-12-05 13:35:20.753059] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.354 [2024-12-05 13:35:20.753115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.354 [2024-12-05 13:35:20.753128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.354 [2024-12-05 13:35:20.753135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.354 [2024-12-05 13:35:20.753142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.354 [2024-12-05 13:35:20.753156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.354 qpair failed and we were unable to recover it. 00:30:58.354 [2024-12-05 13:35:20.762996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.354 [2024-12-05 13:35:20.763090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.354 [2024-12-05 13:35:20.763105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.354 [2024-12-05 13:35:20.763112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.354 [2024-12-05 13:35:20.763119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.354 [2024-12-05 13:35:20.763133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.354 qpair failed and we were unable to recover it. 
00:30:58.354 [2024-12-05 13:35:20.773082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.354 [2024-12-05 13:35:20.773146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.354 [2024-12-05 13:35:20.773159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.354 [2024-12-05 13:35:20.773166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.354 [2024-12-05 13:35:20.773173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.354 [2024-12-05 13:35:20.773186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.354 qpair failed and we were unable to recover it. 00:30:58.354 [2024-12-05 13:35:20.782974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.354 [2024-12-05 13:35:20.783033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.354 [2024-12-05 13:35:20.783046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.354 [2024-12-05 13:35:20.783054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.354 [2024-12-05 13:35:20.783060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.354 [2024-12-05 13:35:20.783074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.354 qpair failed and we were unable to recover it. 00:30:58.354 [2024-12-05 13:35:20.793098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.354 [2024-12-05 13:35:20.793157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.354 [2024-12-05 13:35:20.793170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.355 [2024-12-05 13:35:20.793177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.355 [2024-12-05 13:35:20.793184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.355 [2024-12-05 13:35:20.793197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.355 qpair failed and we were unable to recover it. 
00:30:58.355 [2024-12-05 13:35:20.803005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.355 [2024-12-05 13:35:20.803079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.355 [2024-12-05 13:35:20.803093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.355 [2024-12-05 13:35:20.803100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.355 [2024-12-05 13:35:20.803107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.355 [2024-12-05 13:35:20.803120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.355 qpair failed and we were unable to recover it. 00:30:58.355 [2024-12-05 13:35:20.813166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.355 [2024-12-05 13:35:20.813225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.355 [2024-12-05 13:35:20.813238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.355 [2024-12-05 13:35:20.813246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.355 [2024-12-05 13:35:20.813252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.355 [2024-12-05 13:35:20.813266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.355 qpair failed and we were unable to recover it. 00:30:58.355 [2024-12-05 13:35:20.823200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.355 [2024-12-05 13:35:20.823254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.355 [2024-12-05 13:35:20.823271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.355 [2024-12-05 13:35:20.823278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.355 [2024-12-05 13:35:20.823285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.355 [2024-12-05 13:35:20.823299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.355 qpair failed and we were unable to recover it. 
00:30:58.355 [2024-12-05 13:35:20.833214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.355 [2024-12-05 13:35:20.833334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.355 [2024-12-05 13:35:20.833350] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.355 [2024-12-05 13:35:20.833357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.355 [2024-12-05 13:35:20.833364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.355 [2024-12-05 13:35:20.833377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.355 qpair failed and we were unable to recover it. 00:30:58.355 [2024-12-05 13:35:20.843243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.355 [2024-12-05 13:35:20.843319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.355 [2024-12-05 13:35:20.843333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.355 [2024-12-05 13:35:20.843340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.355 [2024-12-05 13:35:20.843346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.355 [2024-12-05 13:35:20.843360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.355 qpair failed and we were unable to recover it. 00:30:58.355 [2024-12-05 13:35:20.853169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.355 [2024-12-05 13:35:20.853229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.355 [2024-12-05 13:35:20.853242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.355 [2024-12-05 13:35:20.853249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.355 [2024-12-05 13:35:20.853256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.355 [2024-12-05 13:35:20.853269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.355 qpair failed and we were unable to recover it. 
00:30:58.355 [2024-12-05 13:35:20.863188] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.355 [2024-12-05 13:35:20.863243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.355 [2024-12-05 13:35:20.863256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.355 [2024-12-05 13:35:20.863267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.355 [2024-12-05 13:35:20.863274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.355 [2024-12-05 13:35:20.863287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.355 qpair failed and we were unable to recover it. 00:30:58.355 [2024-12-05 13:35:20.873337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.355 [2024-12-05 13:35:20.873408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.355 [2024-12-05 13:35:20.873421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.355 [2024-12-05 13:35:20.873428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.355 [2024-12-05 13:35:20.873435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.355 [2024-12-05 13:35:20.873448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.355 qpair failed and we were unable to recover it. 00:30:58.355 [2024-12-05 13:35:20.883349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.355 [2024-12-05 13:35:20.883400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.355 [2024-12-05 13:35:20.883414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.355 [2024-12-05 13:35:20.883422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.355 [2024-12-05 13:35:20.883428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.355 [2024-12-05 13:35:20.883442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.355 qpair failed and we were unable to recover it. 
00:30:58.355 [2024-12-05 13:35:20.893273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.355 [2024-12-05 13:35:20.893329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.355 [2024-12-05 13:35:20.893343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.355 [2024-12-05 13:35:20.893350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.355 [2024-12-05 13:35:20.893357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.355 [2024-12-05 13:35:20.893370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.355 qpair failed and we were unable to recover it. 00:30:58.355 [2024-12-05 13:35:20.903367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.355 [2024-12-05 13:35:20.903438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.355 [2024-12-05 13:35:20.903452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.355 [2024-12-05 13:35:20.903459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.355 [2024-12-05 13:35:20.903466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.355 [2024-12-05 13:35:20.903480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.355 qpair failed and we were unable to recover it. 00:30:58.355 [2024-12-05 13:35:20.913438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:58.355 [2024-12-05 13:35:20.913490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:58.355 [2024-12-05 13:35:20.913505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:58.355 [2024-12-05 13:35:20.913513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:58.355 [2024-12-05 13:35:20.913520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:58.355 [2024-12-05 13:35:20.913534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:58.355 qpair failed and we were unable to recover it. 
00:30:58.619 [2024-12-05 13:35:20.923474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.619 [2024-12-05 13:35:20.923527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.619 [2024-12-05 13:35:20.923541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.619 [2024-12-05 13:35:20.923549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.619 [2024-12-05 13:35:20.923555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.619 [2024-12-05 13:35:20.923569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.619 qpair failed and we were unable to recover it.
00:30:58.619 [2024-12-05 13:35:20.933487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.619 [2024-12-05 13:35:20.933579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.619 [2024-12-05 13:35:20.933592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.619 [2024-12-05 13:35:20.933600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.619 [2024-12-05 13:35:20.933607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.619 [2024-12-05 13:35:20.933620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.619 qpair failed and we were unable to recover it.
00:30:58.619 [2024-12-05 13:35:20.943502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.619 [2024-12-05 13:35:20.943569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.619 [2024-12-05 13:35:20.943583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.619 [2024-12-05 13:35:20.943590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.620 [2024-12-05 13:35:20.943597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.620 [2024-12-05 13:35:20.943610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.620 qpair failed and we were unable to recover it.
00:30:58.620 [2024-12-05 13:35:20.953565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.620 [2024-12-05 13:35:20.953620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.620 [2024-12-05 13:35:20.953637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.620 [2024-12-05 13:35:20.953645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.620 [2024-12-05 13:35:20.953651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.620 [2024-12-05 13:35:20.953665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.620 qpair failed and we were unable to recover it.
00:30:58.620 [2024-12-05 13:35:20.963562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.620 [2024-12-05 13:35:20.963641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.620 [2024-12-05 13:35:20.963654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.620 [2024-12-05 13:35:20.963662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.620 [2024-12-05 13:35:20.963669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.620 [2024-12-05 13:35:20.963683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.620 qpair failed and we were unable to recover it.
00:30:58.620 [2024-12-05 13:35:20.973496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.620 [2024-12-05 13:35:20.973555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.620 [2024-12-05 13:35:20.973568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.620 [2024-12-05 13:35:20.973576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.620 [2024-12-05 13:35:20.973584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.620 [2024-12-05 13:35:20.973597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.620 qpair failed and we were unable to recover it.
00:30:58.620 [2024-12-05 13:35:20.983645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.620 [2024-12-05 13:35:20.983699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.620 [2024-12-05 13:35:20.983713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.620 [2024-12-05 13:35:20.983721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.620 [2024-12-05 13:35:20.983728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.620 [2024-12-05 13:35:20.983742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.620 qpair failed and we were unable to recover it.
00:30:58.620 [2024-12-05 13:35:20.993664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.620 [2024-12-05 13:35:20.993743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.620 [2024-12-05 13:35:20.993756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.620 [2024-12-05 13:35:20.993767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.620 [2024-12-05 13:35:20.993774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.620 [2024-12-05 13:35:20.993788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.620 qpair failed and we were unable to recover it.
00:30:58.620 [2024-12-05 13:35:21.003703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.620 [2024-12-05 13:35:21.003762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.620 [2024-12-05 13:35:21.003776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.620 [2024-12-05 13:35:21.003784] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.620 [2024-12-05 13:35:21.003791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.620 [2024-12-05 13:35:21.003804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.620 qpair failed and we were unable to recover it.
00:30:58.620 [2024-12-05 13:35:21.013725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.620 [2024-12-05 13:35:21.013781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.620 [2024-12-05 13:35:21.013795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.620 [2024-12-05 13:35:21.013802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.620 [2024-12-05 13:35:21.013808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.620 [2024-12-05 13:35:21.013822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.620 qpair failed and we were unable to recover it.
00:30:58.620 [2024-12-05 13:35:21.023704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.620 [2024-12-05 13:35:21.023766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.620 [2024-12-05 13:35:21.023780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.620 [2024-12-05 13:35:21.023787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.620 [2024-12-05 13:35:21.023794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.620 [2024-12-05 13:35:21.023808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.620 qpair failed and we were unable to recover it.
00:30:58.620 [2024-12-05 13:35:21.033773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.620 [2024-12-05 13:35:21.033860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.620 [2024-12-05 13:35:21.033878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.620 [2024-12-05 13:35:21.033886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.620 [2024-12-05 13:35:21.033892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.620 [2024-12-05 13:35:21.033906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.620 qpair failed and we were unable to recover it.
00:30:58.620 [2024-12-05 13:35:21.043806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.620 [2024-12-05 13:35:21.043865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.620 [2024-12-05 13:35:21.043879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.620 [2024-12-05 13:35:21.043887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.620 [2024-12-05 13:35:21.043893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.620 [2024-12-05 13:35:21.043907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.620 qpair failed and we were unable to recover it.
00:30:58.620 [2024-12-05 13:35:21.053869] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.621 [2024-12-05 13:35:21.053935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.621 [2024-12-05 13:35:21.053951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.621 [2024-12-05 13:35:21.053958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.621 [2024-12-05 13:35:21.053968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.621 [2024-12-05 13:35:21.053983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.621 qpair failed and we were unable to recover it.
00:30:58.621 [2024-12-05 13:35:21.063878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.621 [2024-12-05 13:35:21.063932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.621 [2024-12-05 13:35:21.063947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.621 [2024-12-05 13:35:21.063955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.621 [2024-12-05 13:35:21.063961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.621 [2024-12-05 13:35:21.063975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.621 qpair failed and we were unable to recover it.
00:30:58.621 [2024-12-05 13:35:21.073902] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.621 [2024-12-05 13:35:21.073951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.621 [2024-12-05 13:35:21.073965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.621 [2024-12-05 13:35:21.073972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.621 [2024-12-05 13:35:21.073979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.621 [2024-12-05 13:35:21.073993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.621 qpair failed and we were unable to recover it.
00:30:58.621 [2024-12-05 13:35:21.083909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.621 [2024-12-05 13:35:21.083964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.621 [2024-12-05 13:35:21.083981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.621 [2024-12-05 13:35:21.083989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.621 [2024-12-05 13:35:21.083995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.621 [2024-12-05 13:35:21.084010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.621 qpair failed and we were unable to recover it.
00:30:58.621 [2024-12-05 13:35:21.093990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.621 [2024-12-05 13:35:21.094067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.621 [2024-12-05 13:35:21.094081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.621 [2024-12-05 13:35:21.094088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.621 [2024-12-05 13:35:21.094095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.621 [2024-12-05 13:35:21.094110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.621 qpair failed and we were unable to recover it.
00:30:58.621 [2024-12-05 13:35:21.103888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.621 [2024-12-05 13:35:21.103947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.621 [2024-12-05 13:35:21.103960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.621 [2024-12-05 13:35:21.103968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.621 [2024-12-05 13:35:21.103974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.621 [2024-12-05 13:35:21.103989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.621 qpair failed and we were unable to recover it.
00:30:58.621 [2024-12-05 13:35:21.113984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.621 [2024-12-05 13:35:21.114036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.621 [2024-12-05 13:35:21.114050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.621 [2024-12-05 13:35:21.114057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.621 [2024-12-05 13:35:21.114064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.621 [2024-12-05 13:35:21.114077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.621 qpair failed and we were unable to recover it.
00:30:58.621 [2024-12-05 13:35:21.124041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.621 [2024-12-05 13:35:21.124097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.621 [2024-12-05 13:35:21.124112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.621 [2024-12-05 13:35:21.124123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.621 [2024-12-05 13:35:21.124129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.621 [2024-12-05 13:35:21.124143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.621 qpair failed and we were unable to recover it.
00:30:58.621 [2024-12-05 13:35:21.134083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.621 [2024-12-05 13:35:21.134167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.621 [2024-12-05 13:35:21.134181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.621 [2024-12-05 13:35:21.134189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.621 [2024-12-05 13:35:21.134195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.621 [2024-12-05 13:35:21.134209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.621 qpair failed and we were unable to recover it.
00:30:58.621 [2024-12-05 13:35:21.144095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.621 [2024-12-05 13:35:21.144153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.621 [2024-12-05 13:35:21.144167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.621 [2024-12-05 13:35:21.144174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.621 [2024-12-05 13:35:21.144181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.621 [2024-12-05 13:35:21.144195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.622 qpair failed and we were unable to recover it.
00:30:58.622 [2024-12-05 13:35:21.154114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.622 [2024-12-05 13:35:21.154167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.622 [2024-12-05 13:35:21.154180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.622 [2024-12-05 13:35:21.154187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.622 [2024-12-05 13:35:21.154194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.622 [2024-12-05 13:35:21.154207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.622 qpair failed and we were unable to recover it.
00:30:58.622 [2024-12-05 13:35:21.164144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.622 [2024-12-05 13:35:21.164198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.622 [2024-12-05 13:35:21.164212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.622 [2024-12-05 13:35:21.164219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.622 [2024-12-05 13:35:21.164226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.622 [2024-12-05 13:35:21.164239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.622 qpair failed and we were unable to recover it.
00:30:58.622 [2024-12-05 13:35:21.174169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.622 [2024-12-05 13:35:21.174253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.622 [2024-12-05 13:35:21.174267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.622 [2024-12-05 13:35:21.174274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.622 [2024-12-05 13:35:21.174281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.622 [2024-12-05 13:35:21.174295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.622 qpair failed and we were unable to recover it.
00:30:58.884 [2024-12-05 13:35:21.184173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.884 [2024-12-05 13:35:21.184229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.884 [2024-12-05 13:35:21.184242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.884 [2024-12-05 13:35:21.184250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.884 [2024-12-05 13:35:21.184256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.884 [2024-12-05 13:35:21.184270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.884 qpair failed and we were unable to recover it.
00:30:58.884 [2024-12-05 13:35:21.194238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.884 [2024-12-05 13:35:21.194293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.884 [2024-12-05 13:35:21.194307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.884 [2024-12-05 13:35:21.194314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.884 [2024-12-05 13:35:21.194321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.884 [2024-12-05 13:35:21.194334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.884 qpair failed and we were unable to recover it.
00:30:58.885 [2024-12-05 13:35:21.204144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.885 [2024-12-05 13:35:21.204235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.885 [2024-12-05 13:35:21.204249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.885 [2024-12-05 13:35:21.204257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.885 [2024-12-05 13:35:21.204263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.885 [2024-12-05 13:35:21.204276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.885 qpair failed and we were unable to recover it.
00:30:58.885 [2024-12-05 13:35:21.214298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.885 [2024-12-05 13:35:21.214358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.885 [2024-12-05 13:35:21.214375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.885 [2024-12-05 13:35:21.214382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.885 [2024-12-05 13:35:21.214389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.885 [2024-12-05 13:35:21.214402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.885 qpair failed and we were unable to recover it.
00:30:58.885 [2024-12-05 13:35:21.224313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.885 [2024-12-05 13:35:21.224374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.885 [2024-12-05 13:35:21.224388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.885 [2024-12-05 13:35:21.224395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.885 [2024-12-05 13:35:21.224401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.885 [2024-12-05 13:35:21.224415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.885 qpair failed and we were unable to recover it.
00:30:58.885 [2024-12-05 13:35:21.234334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.885 [2024-12-05 13:35:21.234393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.885 [2024-12-05 13:35:21.234406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.885 [2024-12-05 13:35:21.234413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.885 [2024-12-05 13:35:21.234420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.885 [2024-12-05 13:35:21.234434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.885 qpair failed and we were unable to recover it.
00:30:58.885 [2024-12-05 13:35:21.244365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.885 [2024-12-05 13:35:21.244421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.885 [2024-12-05 13:35:21.244434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.885 [2024-12-05 13:35:21.244442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.885 [2024-12-05 13:35:21.244448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.885 [2024-12-05 13:35:21.244462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.885 qpair failed and we were unable to recover it.
00:30:58.885 [2024-12-05 13:35:21.254400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.885 [2024-12-05 13:35:21.254455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.885 [2024-12-05 13:35:21.254469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.885 [2024-12-05 13:35:21.254480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.885 [2024-12-05 13:35:21.254486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.885 [2024-12-05 13:35:21.254500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.885 qpair failed and we were unable to recover it.
00:30:58.885 [2024-12-05 13:35:21.264441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.885 [2024-12-05 13:35:21.264498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.885 [2024-12-05 13:35:21.264511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.885 [2024-12-05 13:35:21.264519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.885 [2024-12-05 13:35:21.264525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.885 [2024-12-05 13:35:21.264540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.885 qpair failed and we were unable to recover it.
00:30:58.885 [2024-12-05 13:35:21.274422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.885 [2024-12-05 13:35:21.274476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.885 [2024-12-05 13:35:21.274490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.885 [2024-12-05 13:35:21.274497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.885 [2024-12-05 13:35:21.274504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.885 [2024-12-05 13:35:21.274517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.885 qpair failed and we were unable to recover it.
00:30:58.885 [2024-12-05 13:35:21.284477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.885 [2024-12-05 13:35:21.284532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.885 [2024-12-05 13:35:21.284546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.885 [2024-12-05 13:35:21.284553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.885 [2024-12-05 13:35:21.284560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.885 [2024-12-05 13:35:21.284574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.885 qpair failed and we were unable to recover it.
00:30:58.885 [2024-12-05 13:35:21.294510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.885 [2024-12-05 13:35:21.294569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.885 [2024-12-05 13:35:21.294583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.885 [2024-12-05 13:35:21.294590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.885 [2024-12-05 13:35:21.294597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.885 [2024-12-05 13:35:21.294611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.885 qpair failed and we were unable to recover it.
00:30:58.885 [2024-12-05 13:35:21.304549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.885 [2024-12-05 13:35:21.304617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.885 [2024-12-05 13:35:21.304642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.885 [2024-12-05 13:35:21.304651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.885 [2024-12-05 13:35:21.304659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.885 [2024-12-05 13:35:21.304679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.885 qpair failed and we were unable to recover it.
00:30:58.886 [2024-12-05 13:35:21.314564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.886 [2024-12-05 13:35:21.314667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.886 [2024-12-05 13:35:21.314694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.886 [2024-12-05 13:35:21.314703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.886 [2024-12-05 13:35:21.314710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.886 [2024-12-05 13:35:21.314730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.886 qpair failed and we were unable to recover it.
00:30:58.886 [2024-12-05 13:35:21.324607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.886 [2024-12-05 13:35:21.324708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.886 [2024-12-05 13:35:21.324724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.886 [2024-12-05 13:35:21.324732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.886 [2024-12-05 13:35:21.324740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.886 [2024-12-05 13:35:21.324755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.886 qpair failed and we were unable to recover it.
00:30:58.886 [2024-12-05 13:35:21.334618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.886 [2024-12-05 13:35:21.334678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.886 [2024-12-05 13:35:21.334692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.886 [2024-12-05 13:35:21.334700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.886 [2024-12-05 13:35:21.334706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.886 [2024-12-05 13:35:21.334720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.886 qpair failed and we were unable to recover it.
00:30:58.886 [2024-12-05 13:35:21.344638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.886 [2024-12-05 13:35:21.344698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.886 [2024-12-05 13:35:21.344712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.886 [2024-12-05 13:35:21.344720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.886 [2024-12-05 13:35:21.344726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.886 [2024-12-05 13:35:21.344741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.886 qpair failed and we were unable to recover it.
00:30:58.886 [2024-12-05 13:35:21.354673] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.886 [2024-12-05 13:35:21.354727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.886 [2024-12-05 13:35:21.354740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.886 [2024-12-05 13:35:21.354747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.886 [2024-12-05 13:35:21.354754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.886 [2024-12-05 13:35:21.354769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.886 qpair failed and we were unable to recover it.
00:30:58.886 [2024-12-05 13:35:21.364583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.886 [2024-12-05 13:35:21.364639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.886 [2024-12-05 13:35:21.364652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.886 [2024-12-05 13:35:21.364660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.886 [2024-12-05 13:35:21.364667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.886 [2024-12-05 13:35:21.364680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.886 qpair failed and we were unable to recover it.
00:30:58.886 [2024-12-05 13:35:21.374740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.886 [2024-12-05 13:35:21.374832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.886 [2024-12-05 13:35:21.374846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.886 [2024-12-05 13:35:21.374854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.886 [2024-12-05 13:35:21.374865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.886 [2024-12-05 13:35:21.374880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.886 qpair failed and we were unable to recover it.
00:30:58.886 [2024-12-05 13:35:21.384757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.886 [2024-12-05 13:35:21.384833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.886 [2024-12-05 13:35:21.384847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.886 [2024-12-05 13:35:21.384858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.886 [2024-12-05 13:35:21.384869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.886 [2024-12-05 13:35:21.384884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.886 qpair failed and we were unable to recover it.
00:30:58.886 [2024-12-05 13:35:21.394718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.886 [2024-12-05 13:35:21.394775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.886 [2024-12-05 13:35:21.394788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.886 [2024-12-05 13:35:21.394795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.886 [2024-12-05 13:35:21.394802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.886 [2024-12-05 13:35:21.394816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.886 qpair failed and we were unable to recover it.
00:30:58.886 [2024-12-05 13:35:21.404831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.886 [2024-12-05 13:35:21.404887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.886 [2024-12-05 13:35:21.404902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.886 [2024-12-05 13:35:21.404909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.886 [2024-12-05 13:35:21.404916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.886 [2024-12-05 13:35:21.404931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.886 qpair failed and we were unable to recover it.
00:30:58.886 [2024-12-05 13:35:21.414803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.886 [2024-12-05 13:35:21.414907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.886 [2024-12-05 13:35:21.414922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.886 [2024-12-05 13:35:21.414931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.886 [2024-12-05 13:35:21.414939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.886 [2024-12-05 13:35:21.414953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.886 qpair failed and we were unable to recover it.
00:30:58.886 [2024-12-05 13:35:21.424882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.887 [2024-12-05 13:35:21.424977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.887 [2024-12-05 13:35:21.424990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.887 [2024-12-05 13:35:21.424998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.887 [2024-12-05 13:35:21.425005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.887 [2024-12-05 13:35:21.425019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.887 qpair failed and we were unable to recover it.
00:30:58.887 [2024-12-05 13:35:21.434880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.887 [2024-12-05 13:35:21.434935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.887 [2024-12-05 13:35:21.434949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.887 [2024-12-05 13:35:21.434956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.887 [2024-12-05 13:35:21.434963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.887 [2024-12-05 13:35:21.434977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.887 qpair failed and we were unable to recover it.
00:30:58.887 [2024-12-05 13:35:21.444905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:58.887 [2024-12-05 13:35:21.444956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:58.887 [2024-12-05 13:35:21.444969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:58.887 [2024-12-05 13:35:21.444976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:58.887 [2024-12-05 13:35:21.444983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:58.887 [2024-12-05 13:35:21.444997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:58.887 qpair failed and we were unable to recover it.
00:30:59.149 [2024-12-05 13:35:21.454961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.149 [2024-12-05 13:35:21.455020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.149 [2024-12-05 13:35:21.455034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.149 [2024-12-05 13:35:21.455041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.149 [2024-12-05 13:35:21.455048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.149 [2024-12-05 13:35:21.455062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.149 qpair failed and we were unable to recover it.
00:30:59.149 [2024-12-05 13:35:21.464901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.149 [2024-12-05 13:35:21.465037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.149 [2024-12-05 13:35:21.465051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.149 [2024-12-05 13:35:21.465059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.149 [2024-12-05 13:35:21.465066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.149 [2024-12-05 13:35:21.465080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.149 qpair failed and we were unable to recover it.
00:30:59.149 [2024-12-05 13:35:21.475006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.149 [2024-12-05 13:35:21.475065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.149 [2024-12-05 13:35:21.475079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.149 [2024-12-05 13:35:21.475086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.149 [2024-12-05 13:35:21.475093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.149 [2024-12-05 13:35:21.475107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.149 qpair failed and we were unable to recover it.
00:30:59.149 [2024-12-05 13:35:21.485022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.149 [2024-12-05 13:35:21.485119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.149 [2024-12-05 13:35:21.485133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.149 [2024-12-05 13:35:21.485141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.149 [2024-12-05 13:35:21.485147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.149 [2024-12-05 13:35:21.485161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.149 qpair failed and we were unable to recover it.
00:30:59.149 [2024-12-05 13:35:21.495069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.149 [2024-12-05 13:35:21.495125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.149 [2024-12-05 13:35:21.495139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.149 [2024-12-05 13:35:21.495146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.149 [2024-12-05 13:35:21.495153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.149 [2024-12-05 13:35:21.495167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.149 qpair failed and we were unable to recover it.
00:30:59.149 [2024-12-05 13:35:21.504967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.149 [2024-12-05 13:35:21.505022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.150 [2024-12-05 13:35:21.505035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.150 [2024-12-05 13:35:21.505042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.150 [2024-12-05 13:35:21.505049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.150 [2024-12-05 13:35:21.505062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.150 qpair failed and we were unable to recover it.
00:30:59.150 [2024-12-05 13:35:21.515120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.150 [2024-12-05 13:35:21.515206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.150 [2024-12-05 13:35:21.515219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.150 [2024-12-05 13:35:21.515236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.150 [2024-12-05 13:35:21.515243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.150 [2024-12-05 13:35:21.515256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.150 qpair failed and we were unable to recover it.
00:30:59.150 [2024-12-05 13:35:21.525026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.150 [2024-12-05 13:35:21.525091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.150 [2024-12-05 13:35:21.525105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.150 [2024-12-05 13:35:21.525113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.150 [2024-12-05 13:35:21.525119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.150 [2024-12-05 13:35:21.525133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.150 qpair failed and we were unable to recover it.
00:30:59.150 [2024-12-05 13:35:21.535184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.150 [2024-12-05 13:35:21.535241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.150 [2024-12-05 13:35:21.535255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.150 [2024-12-05 13:35:21.535262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.150 [2024-12-05 13:35:21.535269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.150 [2024-12-05 13:35:21.535282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.150 qpair failed and we were unable to recover it.
00:30:59.150 [2024-12-05 13:35:21.545228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.150 [2024-12-05 13:35:21.545283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.150 [2024-12-05 13:35:21.545297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.150 [2024-12-05 13:35:21.545304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.150 [2024-12-05 13:35:21.545311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.150 [2024-12-05 13:35:21.545325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.150 qpair failed and we were unable to recover it.
00:30:59.150 [2024-12-05 13:35:21.555230] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.150 [2024-12-05 13:35:21.555319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.150 [2024-12-05 13:35:21.555332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.150 [2024-12-05 13:35:21.555340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.150 [2024-12-05 13:35:21.555347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.150 [2024-12-05 13:35:21.555361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.150 qpair failed and we were unable to recover it.
00:30:59.150 [2024-12-05 13:35:21.565270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.150 [2024-12-05 13:35:21.565365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.150 [2024-12-05 13:35:21.565379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.150 [2024-12-05 13:35:21.565387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.150 [2024-12-05 13:35:21.565394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.150 [2024-12-05 13:35:21.565408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.150 qpair failed and we were unable to recover it.
00:30:59.150 [2024-12-05 13:35:21.575346] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.150 [2024-12-05 13:35:21.575411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.150 [2024-12-05 13:35:21.575425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.150 [2024-12-05 13:35:21.575432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.150 [2024-12-05 13:35:21.575439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.150 [2024-12-05 13:35:21.575452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.150 qpair failed and we were unable to recover it.
00:30:59.150 [2024-12-05 13:35:21.585332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.150 [2024-12-05 13:35:21.585390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.150 [2024-12-05 13:35:21.585403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.150 [2024-12-05 13:35:21.585411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.150 [2024-12-05 13:35:21.585417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.150 [2024-12-05 13:35:21.585430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.150 qpair failed and we were unable to recover it.
00:30:59.150 [2024-12-05 13:35:21.595322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.150 [2024-12-05 13:35:21.595376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.150 [2024-12-05 13:35:21.595390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.150 [2024-12-05 13:35:21.595397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.150 [2024-12-05 13:35:21.595404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.150 [2024-12-05 13:35:21.595417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.150 qpair failed and we were unable to recover it.
00:30:59.150 [2024-12-05 13:35:21.605362] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.150 [2024-12-05 13:35:21.605417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.150 [2024-12-05 13:35:21.605431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.150 [2024-12-05 13:35:21.605438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.151 [2024-12-05 13:35:21.605445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.151 [2024-12-05 13:35:21.605458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.151 qpair failed and we were unable to recover it.
00:30:59.151 [2024-12-05 13:35:21.615276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.151 [2024-12-05 13:35:21.615354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.151 [2024-12-05 13:35:21.615367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.151 [2024-12-05 13:35:21.615375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.151 [2024-12-05 13:35:21.615381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.151 [2024-12-05 13:35:21.615395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.151 qpair failed and we were unable to recover it.
00:30:59.151 [2024-12-05 13:35:21.625428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.151 [2024-12-05 13:35:21.625490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.151 [2024-12-05 13:35:21.625503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.151 [2024-12-05 13:35:21.625511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.151 [2024-12-05 13:35:21.625518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.151 [2024-12-05 13:35:21.625531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.151 qpair failed and we were unable to recover it.
00:30:59.151 [2024-12-05 13:35:21.635480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.151 [2024-12-05 13:35:21.635565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.151 [2024-12-05 13:35:21.635579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.151 [2024-12-05 13:35:21.635586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.151 [2024-12-05 13:35:21.635593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.151 [2024-12-05 13:35:21.635607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.151 qpair failed and we were unable to recover it.
00:30:59.151 [2024-12-05 13:35:21.645477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.151 [2024-12-05 13:35:21.645536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.151 [2024-12-05 13:35:21.645550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.151 [2024-12-05 13:35:21.645562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.151 [2024-12-05 13:35:21.645569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.151 [2024-12-05 13:35:21.645582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.151 qpair failed and we were unable to recover it.
00:30:59.151 [2024-12-05 13:35:21.655520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.151 [2024-12-05 13:35:21.655574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.151 [2024-12-05 13:35:21.655588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.151 [2024-12-05 13:35:21.655595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.151 [2024-12-05 13:35:21.655602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.151 [2024-12-05 13:35:21.655616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.151 qpair failed and we were unable to recover it.
00:30:59.151 [2024-12-05 13:35:21.665531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.151 [2024-12-05 13:35:21.665626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.151 [2024-12-05 13:35:21.665653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.151 [2024-12-05 13:35:21.665663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.151 [2024-12-05 13:35:21.665670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.151 [2024-12-05 13:35:21.665690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.151 qpair failed and we were unable to recover it.
00:30:59.151 [2024-12-05 13:35:21.675546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.151 [2024-12-05 13:35:21.675603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.151 [2024-12-05 13:35:21.675619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.151 [2024-12-05 13:35:21.675627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.151 [2024-12-05 13:35:21.675633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.151 [2024-12-05 13:35:21.675649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.151 qpair failed and we were unable to recover it.
00:30:59.151 [2024-12-05 13:35:21.685585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.151 [2024-12-05 13:35:21.685674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.151 [2024-12-05 13:35:21.685688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.151 [2024-12-05 13:35:21.685697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.151 [2024-12-05 13:35:21.685704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.151 [2024-12-05 13:35:21.685723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.151 qpair failed and we were unable to recover it.
00:30:59.151 [2024-12-05 13:35:21.695564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.151 [2024-12-05 13:35:21.695627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.151 [2024-12-05 13:35:21.695642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.151 [2024-12-05 13:35:21.695649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.151 [2024-12-05 13:35:21.695656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.151 [2024-12-05 13:35:21.695671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.151 qpair failed and we were unable to recover it.
00:30:59.151 [2024-12-05 13:35:21.705663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.151 [2024-12-05 13:35:21.705768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.151 [2024-12-05 13:35:21.705794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.152 [2024-12-05 13:35:21.705803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.152 [2024-12-05 13:35:21.705811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.152 [2024-12-05 13:35:21.705831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.152 qpair failed and we were unable to recover it.
00:30:59.415 [2024-12-05 13:35:21.715583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.415 [2024-12-05 13:35:21.715642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.415 [2024-12-05 13:35:21.715658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.415 [2024-12-05 13:35:21.715666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.415 [2024-12-05 13:35:21.715673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.415 [2024-12-05 13:35:21.715689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.415 qpair failed and we were unable to recover it.
00:30:59.415 [2024-12-05 13:35:21.725612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.415 [2024-12-05 13:35:21.725717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.415 [2024-12-05 13:35:21.725733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.415 [2024-12-05 13:35:21.725741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.415 [2024-12-05 13:35:21.725748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.415 [2024-12-05 13:35:21.725762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.415 qpair failed and we were unable to recover it.
00:30:59.415 [2024-12-05 13:35:21.735743] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.415 [2024-12-05 13:35:21.735801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.415 [2024-12-05 13:35:21.735815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.415 [2024-12-05 13:35:21.735823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.415 [2024-12-05 13:35:21.735829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.415 [2024-12-05 13:35:21.735843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.415 qpair failed and we were unable to recover it.
00:30:59.415 [2024-12-05 13:35:21.745790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.415 [2024-12-05 13:35:21.745841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.415 [2024-12-05 13:35:21.745855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.415 [2024-12-05 13:35:21.745868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.415 [2024-12-05 13:35:21.745876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.415 [2024-12-05 13:35:21.745891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.415 qpair failed and we were unable to recover it.
00:30:59.415 [2024-12-05 13:35:21.755778] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.415 [2024-12-05 13:35:21.755838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.415 [2024-12-05 13:35:21.755853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.415 [2024-12-05 13:35:21.755860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.415 [2024-12-05 13:35:21.755876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.415 [2024-12-05 13:35:21.755891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.415 qpair failed and we were unable to recover it.
00:30:59.415 [2024-12-05 13:35:21.765852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.415 [2024-12-05 13:35:21.765925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.415 [2024-12-05 13:35:21.765939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.415 [2024-12-05 13:35:21.765947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.415 [2024-12-05 13:35:21.765954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.415 [2024-12-05 13:35:21.765969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.415 qpair failed and we were unable to recover it.
00:30:59.415 [2024-12-05 13:35:21.775823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.415 [2024-12-05 13:35:21.775885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.415 [2024-12-05 13:35:21.775900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.415 [2024-12-05 13:35:21.775911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.415 [2024-12-05 13:35:21.775918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.415 [2024-12-05 13:35:21.775932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.415 qpair failed and we were unable to recover it.
00:30:59.415 [2024-12-05 13:35:21.785880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.415 [2024-12-05 13:35:21.785938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.415 [2024-12-05 13:35:21.785952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.415 [2024-12-05 13:35:21.785959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.415 [2024-12-05 13:35:21.785966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.415 [2024-12-05 13:35:21.785980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.415 qpair failed and we were unable to recover it.
00:30:59.415 [2024-12-05 13:35:21.795776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.415 [2024-12-05 13:35:21.795835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.415 [2024-12-05 13:35:21.795849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.415 [2024-12-05 13:35:21.795856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.415 [2024-12-05 13:35:21.795866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.415 [2024-12-05 13:35:21.795881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.415 qpair failed and we were unable to recover it.
00:30:59.415 [2024-12-05 13:35:21.805928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.415 [2024-12-05 13:35:21.805997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.415 [2024-12-05 13:35:21.806011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.415 [2024-12-05 13:35:21.806018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.415 [2024-12-05 13:35:21.806025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.416 [2024-12-05 13:35:21.806039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.416 qpair failed and we were unable to recover it.
00:30:59.416 [2024-12-05 13:35:21.815973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.416 [2024-12-05 13:35:21.816089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.416 [2024-12-05 13:35:21.816103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.416 [2024-12-05 13:35:21.816110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.416 [2024-12-05 13:35:21.816117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.416 [2024-12-05 13:35:21.816134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.416 qpair failed and we were unable to recover it.
00:30:59.416 [2024-12-05 13:35:21.825978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.416 [2024-12-05 13:35:21.826081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.416 [2024-12-05 13:35:21.826095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.416 [2024-12-05 13:35:21.826102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.416 [2024-12-05 13:35:21.826109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.416 [2024-12-05 13:35:21.826123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.416 qpair failed and we were unable to recover it.
00:30:59.416 [2024-12-05 13:35:21.836030] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.416 [2024-12-05 13:35:21.836081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.416 [2024-12-05 13:35:21.836094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.416 [2024-12-05 13:35:21.836101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.416 [2024-12-05 13:35:21.836108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.416 [2024-12-05 13:35:21.836122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.416 qpair failed and we were unable to recover it.
00:30:59.416 [2024-12-05 13:35:21.846019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.416 [2024-12-05 13:35:21.846071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.416 [2024-12-05 13:35:21.846085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.416 [2024-12-05 13:35:21.846092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.416 [2024-12-05 13:35:21.846099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.416 [2024-12-05 13:35:21.846112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.416 qpair failed and we were unable to recover it.
00:30:59.416 [2024-12-05 13:35:21.856006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.416 [2024-12-05 13:35:21.856063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.416 [2024-12-05 13:35:21.856077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.416 [2024-12-05 13:35:21.856084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.416 [2024-12-05 13:35:21.856090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.416 [2024-12-05 13:35:21.856104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.416 qpair failed and we were unable to recover it.
00:30:59.416 [2024-12-05 13:35:21.866000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.416 [2024-12-05 13:35:21.866058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.416 [2024-12-05 13:35:21.866072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.416 [2024-12-05 13:35:21.866080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.416 [2024-12-05 13:35:21.866086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.416 [2024-12-05 13:35:21.866100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.416 qpair failed and we were unable to recover it.
00:30:59.416 [2024-12-05 13:35:21.876173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.416 [2024-12-05 13:35:21.876244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.416 [2024-12-05 13:35:21.876257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.416 [2024-12-05 13:35:21.876265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.416 [2024-12-05 13:35:21.876271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.416 [2024-12-05 13:35:21.876286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.416 qpair failed and we were unable to recover it.
00:30:59.416 [2024-12-05 13:35:21.886153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.416 [2024-12-05 13:35:21.886211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.416 [2024-12-05 13:35:21.886224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.416 [2024-12-05 13:35:21.886232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.416 [2024-12-05 13:35:21.886238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.416 [2024-12-05 13:35:21.886252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.416 qpair failed and we were unable to recover it.
00:30:59.416 [2024-12-05 13:35:21.896180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.416 [2024-12-05 13:35:21.896263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.416 [2024-12-05 13:35:21.896276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.416 [2024-12-05 13:35:21.896283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.416 [2024-12-05 13:35:21.896290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.416 [2024-12-05 13:35:21.896304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.416 qpair failed and we were unable to recover it.
00:30:59.416 [2024-12-05 13:35:21.906113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.416 [2024-12-05 13:35:21.906174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.416 [2024-12-05 13:35:21.906188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.416 [2024-12-05 13:35:21.906199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.416 [2024-12-05 13:35:21.906205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.416 [2024-12-05 13:35:21.906219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.416 qpair failed and we were unable to recover it.
00:30:59.416 [2024-12-05 13:35:21.916235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.417 [2024-12-05 13:35:21.916291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.417 [2024-12-05 13:35:21.916304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.417 [2024-12-05 13:35:21.916312] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.417 [2024-12-05 13:35:21.916318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.417 [2024-12-05 13:35:21.916333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.417 qpair failed and we were unable to recover it.
00:30:59.417 [2024-12-05 13:35:21.926264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.417 [2024-12-05 13:35:21.926325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.417 [2024-12-05 13:35:21.926338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.417 [2024-12-05 13:35:21.926345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.417 [2024-12-05 13:35:21.926352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.417 [2024-12-05 13:35:21.926366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.417 qpair failed and we were unable to recover it.
00:30:59.417 [2024-12-05 13:35:21.936406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.417 [2024-12-05 13:35:21.936486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.417 [2024-12-05 13:35:21.936500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.417 [2024-12-05 13:35:21.936508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.417 [2024-12-05 13:35:21.936515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.417 [2024-12-05 13:35:21.936534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.417 qpair failed and we were unable to recover it.
00:30:59.417 [2024-12-05 13:35:21.946396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.417 [2024-12-05 13:35:21.946453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.417 [2024-12-05 13:35:21.946467] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.417 [2024-12-05 13:35:21.946474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.417 [2024-12-05 13:35:21.946481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.417 [2024-12-05 13:35:21.946499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.417 qpair failed and we were unable to recover it.
00:30:59.417 [2024-12-05 13:35:21.956400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.417 [2024-12-05 13:35:21.956460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.417 [2024-12-05 13:35:21.956473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.417 [2024-12-05 13:35:21.956480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.417 [2024-12-05 13:35:21.956487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.417 [2024-12-05 13:35:21.956501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.417 qpair failed and we were unable to recover it.
00:30:59.417 [2024-12-05 13:35:21.966314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.417 [2024-12-05 13:35:21.966384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.417 [2024-12-05 13:35:21.966398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.417 [2024-12-05 13:35:21.966406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.417 [2024-12-05 13:35:21.966412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.417 [2024-12-05 13:35:21.966427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.417 qpair failed and we were unable to recover it.
00:30:59.417 [2024-12-05 13:35:21.976417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.417 [2024-12-05 13:35:21.976471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.417 [2024-12-05 13:35:21.976485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.417 [2024-12-05 13:35:21.976493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.417 [2024-12-05 13:35:21.976500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.417 [2024-12-05 13:35:21.976514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.417 qpair failed and we were unable to recover it.
00:30:59.679 [2024-12-05 13:35:21.986461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.679 [2024-12-05 13:35:21.986517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.679 [2024-12-05 13:35:21.986530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.679 [2024-12-05 13:35:21.986538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.679 [2024-12-05 13:35:21.986544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.679 [2024-12-05 13:35:21.986558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.679 qpair failed and we were unable to recover it.
00:30:59.679 [2024-12-05 13:35:21.996461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.679 [2024-12-05 13:35:21.996521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.679 [2024-12-05 13:35:21.996534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.679 [2024-12-05 13:35:21.996542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.679 [2024-12-05 13:35:21.996548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.679 [2024-12-05 13:35:21.996562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.679 qpair failed and we were unable to recover it.
00:30:59.679 [2024-12-05 13:35:22.006469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.680 [2024-12-05 13:35:22.006531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.680 [2024-12-05 13:35:22.006557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.680 [2024-12-05 13:35:22.006566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.680 [2024-12-05 13:35:22.006575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.680 [2024-12-05 13:35:22.006594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.680 qpair failed and we were unable to recover it.
00:30:59.680 [2024-12-05 13:35:22.016535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.680 [2024-12-05 13:35:22.016601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.680 [2024-12-05 13:35:22.016627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.680 [2024-12-05 13:35:22.016636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.680 [2024-12-05 13:35:22.016644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.680 [2024-12-05 13:35:22.016664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.680 qpair failed and we were unable to recover it.
00:30:59.680 [2024-12-05 13:35:22.026455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.680 [2024-12-05 13:35:22.026513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.680 [2024-12-05 13:35:22.026529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.680 [2024-12-05 13:35:22.026537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.680 [2024-12-05 13:35:22.026544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.680 [2024-12-05 13:35:22.026559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.680 qpair failed and we were unable to recover it.
00:30:59.680 [2024-12-05 13:35:22.036485] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.680 [2024-12-05 13:35:22.036539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.680 [2024-12-05 13:35:22.036553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.680 [2024-12-05 13:35:22.036565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.680 [2024-12-05 13:35:22.036572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.680 [2024-12-05 13:35:22.036586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.680 qpair failed and we were unable to recover it.
00:30:59.680 [2024-12-05 13:35:22.046568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.680 [2024-12-05 13:35:22.046621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.680 [2024-12-05 13:35:22.046635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.680 [2024-12-05 13:35:22.046643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.680 [2024-12-05 13:35:22.046649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.680 [2024-12-05 13:35:22.046663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.680 qpair failed and we were unable to recover it.
00:30:59.680 [2024-12-05 13:35:22.056648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:59.680 [2024-12-05 13:35:22.056743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:59.680 [2024-12-05 13:35:22.056758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:59.680 [2024-12-05 13:35:22.056765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:59.680 [2024-12-05 13:35:22.056772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:30:59.680 [2024-12-05 13:35:22.056786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:59.680 qpair failed and we were unable to recover it.
00:30:59.680 [2024-12-05 13:35:22.066660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.680 [2024-12-05 13:35:22.066716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.680 [2024-12-05 13:35:22.066730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.680 [2024-12-05 13:35:22.066738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.680 [2024-12-05 13:35:22.066745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.680 [2024-12-05 13:35:22.066759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.680 qpair failed and we were unable to recover it. 00:30:59.680 [2024-12-05 13:35:22.076697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.680 [2024-12-05 13:35:22.076790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.680 [2024-12-05 13:35:22.076804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.680 [2024-12-05 13:35:22.076812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.680 [2024-12-05 13:35:22.076818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.680 [2024-12-05 13:35:22.076837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.680 qpair failed and we were unable to recover it. 00:30:59.680 [2024-12-05 13:35:22.086674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.680 [2024-12-05 13:35:22.086732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.680 [2024-12-05 13:35:22.086745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.680 [2024-12-05 13:35:22.086753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.680 [2024-12-05 13:35:22.086760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.680 [2024-12-05 13:35:22.086774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.680 qpair failed and we were unable to recover it. 
00:30:59.680 [2024-12-05 13:35:22.096806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.680 [2024-12-05 13:35:22.096872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.680 [2024-12-05 13:35:22.096886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.680 [2024-12-05 13:35:22.096894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.680 [2024-12-05 13:35:22.096900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.680 [2024-12-05 13:35:22.096914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.680 qpair failed and we were unable to recover it. 00:30:59.680 [2024-12-05 13:35:22.106835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.681 [2024-12-05 13:35:22.106905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.681 [2024-12-05 13:35:22.106918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.681 [2024-12-05 13:35:22.106926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.681 [2024-12-05 13:35:22.106933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.681 [2024-12-05 13:35:22.106947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.681 qpair failed and we were unable to recover it. 00:30:59.681 [2024-12-05 13:35:22.116774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.681 [2024-12-05 13:35:22.116841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.681 [2024-12-05 13:35:22.116854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.681 [2024-12-05 13:35:22.116865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.681 [2024-12-05 13:35:22.116872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.681 [2024-12-05 13:35:22.116886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.681 qpair failed and we were unable to recover it. 
00:30:59.681 [2024-12-05 13:35:22.126785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.681 [2024-12-05 13:35:22.126839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.681 [2024-12-05 13:35:22.126852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.681 [2024-12-05 13:35:22.126860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.681 [2024-12-05 13:35:22.126871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.681 [2024-12-05 13:35:22.126885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.681 qpair failed and we were unable to recover it. 00:30:59.681 [2024-12-05 13:35:22.136860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.681 [2024-12-05 13:35:22.136945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.681 [2024-12-05 13:35:22.136959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.681 [2024-12-05 13:35:22.136966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.681 [2024-12-05 13:35:22.136974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.681 [2024-12-05 13:35:22.136988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.681 qpair failed and we were unable to recover it. 00:30:59.681 [2024-12-05 13:35:22.146876] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.681 [2024-12-05 13:35:22.146932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.681 [2024-12-05 13:35:22.146946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.681 [2024-12-05 13:35:22.146953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.681 [2024-12-05 13:35:22.146960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.681 [2024-12-05 13:35:22.146973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.681 qpair failed and we were unable to recover it. 
00:30:59.681 [2024-12-05 13:35:22.156901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.681 [2024-12-05 13:35:22.156958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.681 [2024-12-05 13:35:22.156971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.681 [2024-12-05 13:35:22.156979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.681 [2024-12-05 13:35:22.156986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.681 [2024-12-05 13:35:22.157000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.681 qpair failed and we were unable to recover it. 00:30:59.681 [2024-12-05 13:35:22.166890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.681 [2024-12-05 13:35:22.166967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.681 [2024-12-05 13:35:22.166981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.681 [2024-12-05 13:35:22.166994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.681 [2024-12-05 13:35:22.167001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.681 [2024-12-05 13:35:22.167015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.681 qpair failed and we were unable to recover it. 00:30:59.681 [2024-12-05 13:35:22.176827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.681 [2024-12-05 13:35:22.176885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.681 [2024-12-05 13:35:22.176899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.681 [2024-12-05 13:35:22.176906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.681 [2024-12-05 13:35:22.176913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.681 [2024-12-05 13:35:22.176927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.681 qpair failed and we were unable to recover it. 
00:30:59.681 [2024-12-05 13:35:22.186996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.681 [2024-12-05 13:35:22.187054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.681 [2024-12-05 13:35:22.187068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.681 [2024-12-05 13:35:22.187075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.681 [2024-12-05 13:35:22.187081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.681 [2024-12-05 13:35:22.187096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.681 qpair failed and we were unable to recover it. 00:30:59.681 [2024-12-05 13:35:22.197004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.681 [2024-12-05 13:35:22.197086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.681 [2024-12-05 13:35:22.197100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.681 [2024-12-05 13:35:22.197107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.681 [2024-12-05 13:35:22.197115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.681 [2024-12-05 13:35:22.197129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.681 qpair failed and we were unable to recover it. 00:30:59.681 [2024-12-05 13:35:22.206996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.681 [2024-12-05 13:35:22.207053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.681 [2024-12-05 13:35:22.207068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.681 [2024-12-05 13:35:22.207076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.681 [2024-12-05 13:35:22.207086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.682 [2024-12-05 13:35:22.207104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.682 qpair failed and we were unable to recover it. 
00:30:59.682 [2024-12-05 13:35:22.217065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.682 [2024-12-05 13:35:22.217123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.682 [2024-12-05 13:35:22.217138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.682 [2024-12-05 13:35:22.217146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.682 [2024-12-05 13:35:22.217152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.682 [2024-12-05 13:35:22.217166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.682 qpair failed and we were unable to recover it. 00:30:59.682 [2024-12-05 13:35:22.227094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.682 [2024-12-05 13:35:22.227148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.682 [2024-12-05 13:35:22.227161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.682 [2024-12-05 13:35:22.227168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.682 [2024-12-05 13:35:22.227175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.682 [2024-12-05 13:35:22.227188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.682 qpair failed and we were unable to recover it. 00:30:59.682 [2024-12-05 13:35:22.237130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.682 [2024-12-05 13:35:22.237195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.682 [2024-12-05 13:35:22.237209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.682 [2024-12-05 13:35:22.237216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.682 [2024-12-05 13:35:22.237223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.682 [2024-12-05 13:35:22.237236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.682 qpair failed and we were unable to recover it. 
00:30:59.945 [2024-12-05 13:35:22.247149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.945 [2024-12-05 13:35:22.247241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.945 [2024-12-05 13:35:22.247255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.945 [2024-12-05 13:35:22.247263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.945 [2024-12-05 13:35:22.247270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.945 [2024-12-05 13:35:22.247284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.945 qpair failed and we were unable to recover it. 00:30:59.945 [2024-12-05 13:35:22.257060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.945 [2024-12-05 13:35:22.257123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.945 [2024-12-05 13:35:22.257138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.945 [2024-12-05 13:35:22.257145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.945 [2024-12-05 13:35:22.257154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.945 [2024-12-05 13:35:22.257170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.945 qpair failed and we were unable to recover it. 00:30:59.945 [2024-12-05 13:35:22.267194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.945 [2024-12-05 13:35:22.267249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.945 [2024-12-05 13:35:22.267263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.945 [2024-12-05 13:35:22.267271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.945 [2024-12-05 13:35:22.267278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.945 [2024-12-05 13:35:22.267292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.945 qpair failed and we were unable to recover it. 
00:30:59.945 [2024-12-05 13:35:22.277227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.945 [2024-12-05 13:35:22.277283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.945 [2024-12-05 13:35:22.277296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.945 [2024-12-05 13:35:22.277304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.945 [2024-12-05 13:35:22.277310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.945 [2024-12-05 13:35:22.277324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.945 qpair failed and we were unable to recover it. 00:30:59.945 [2024-12-05 13:35:22.287212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.945 [2024-12-05 13:35:22.287254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.945 [2024-12-05 13:35:22.287268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.945 [2024-12-05 13:35:22.287275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.945 [2024-12-05 13:35:22.287282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.945 [2024-12-05 13:35:22.287296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.945 qpair failed and we were unable to recover it. 00:30:59.945 [2024-12-05 13:35:22.297305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.946 [2024-12-05 13:35:22.297365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.946 [2024-12-05 13:35:22.297378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.946 [2024-12-05 13:35:22.297389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.946 [2024-12-05 13:35:22.297395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.946 [2024-12-05 13:35:22.297409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.946 qpair failed and we were unable to recover it. 
00:30:59.946 [2024-12-05 13:35:22.307365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.946 [2024-12-05 13:35:22.307420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.946 [2024-12-05 13:35:22.307433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.946 [2024-12-05 13:35:22.307441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.946 [2024-12-05 13:35:22.307448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.946 [2024-12-05 13:35:22.307461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.946 qpair failed and we were unable to recover it. 00:30:59.946 [2024-12-05 13:35:22.317254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.946 [2024-12-05 13:35:22.317359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.946 [2024-12-05 13:35:22.317373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.946 [2024-12-05 13:35:22.317380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.946 [2024-12-05 13:35:22.317387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.946 [2024-12-05 13:35:22.317401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.946 qpair failed and we were unable to recover it. 00:30:59.946 [2024-12-05 13:35:22.327328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.946 [2024-12-05 13:35:22.327381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.946 [2024-12-05 13:35:22.327394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.946 [2024-12-05 13:35:22.327402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.946 [2024-12-05 13:35:22.327408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.946 [2024-12-05 13:35:22.327422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.946 qpair failed and we were unable to recover it. 
00:30:59.946 [2024-12-05 13:35:22.337394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.946 [2024-12-05 13:35:22.337451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.946 [2024-12-05 13:35:22.337464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.946 [2024-12-05 13:35:22.337472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.946 [2024-12-05 13:35:22.337478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.946 [2024-12-05 13:35:22.337496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.946 qpair failed and we were unable to recover it. 00:30:59.946 [2024-12-05 13:35:22.347436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.946 [2024-12-05 13:35:22.347493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.946 [2024-12-05 13:35:22.347506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.946 [2024-12-05 13:35:22.347514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.946 [2024-12-05 13:35:22.347520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.946 [2024-12-05 13:35:22.347534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.946 qpair failed and we were unable to recover it. 00:30:59.946 [2024-12-05 13:35:22.357443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.946 [2024-12-05 13:35:22.357494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.946 [2024-12-05 13:35:22.357507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.946 [2024-12-05 13:35:22.357515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.946 [2024-12-05 13:35:22.357521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.946 [2024-12-05 13:35:22.357535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.946 qpair failed and we were unable to recover it. 
00:30:59.946 [2024-12-05 13:35:22.367427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.946 [2024-12-05 13:35:22.367474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.946 [2024-12-05 13:35:22.367487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.946 [2024-12-05 13:35:22.367495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.946 [2024-12-05 13:35:22.367502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.946 [2024-12-05 13:35:22.367515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.946 qpair failed and we were unable to recover it. 00:30:59.946 [2024-12-05 13:35:22.377400] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.946 [2024-12-05 13:35:22.377468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.946 [2024-12-05 13:35:22.377482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.946 [2024-12-05 13:35:22.377489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.946 [2024-12-05 13:35:22.377496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.946 [2024-12-05 13:35:22.377510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.946 qpair failed and we were unable to recover it. 00:30:59.946 [2024-12-05 13:35:22.387555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.946 [2024-12-05 13:35:22.387669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.946 [2024-12-05 13:35:22.387683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.946 [2024-12-05 13:35:22.387691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.946 [2024-12-05 13:35:22.387698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.946 [2024-12-05 13:35:22.387712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.946 qpair failed and we were unable to recover it. 
00:30:59.947 [2024-12-05 13:35:22.397573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.947 [2024-12-05 13:35:22.397628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.947 [2024-12-05 13:35:22.397642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.947 [2024-12-05 13:35:22.397649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.947 [2024-12-05 13:35:22.397656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.947 [2024-12-05 13:35:22.397670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.947 qpair failed and we were unable to recover it. 00:30:59.947 [2024-12-05 13:35:22.407438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.947 [2024-12-05 13:35:22.407513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.947 [2024-12-05 13:35:22.407527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.947 [2024-12-05 13:35:22.407535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.947 [2024-12-05 13:35:22.407542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.947 [2024-12-05 13:35:22.407556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.947 qpair failed and we were unable to recover it. 00:30:59.947 [2024-12-05 13:35:22.417627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.947 [2024-12-05 13:35:22.417682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.947 [2024-12-05 13:35:22.417695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.947 [2024-12-05 13:35:22.417703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.947 [2024-12-05 13:35:22.417709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.947 [2024-12-05 13:35:22.417723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.947 qpair failed and we were unable to recover it. 
00:30:59.947 [2024-12-05 13:35:22.427659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.947 [2024-12-05 13:35:22.427721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.947 [2024-12-05 13:35:22.427747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.947 [2024-12-05 13:35:22.427760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.947 [2024-12-05 13:35:22.427769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.947 [2024-12-05 13:35:22.427789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.947 qpair failed and we were unable to recover it. 00:30:59.947 [2024-12-05 13:35:22.437690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.947 [2024-12-05 13:35:22.437778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.947 [2024-12-05 13:35:22.437793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.947 [2024-12-05 13:35:22.437802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.947 [2024-12-05 13:35:22.437809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.947 [2024-12-05 13:35:22.437825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.947 qpair failed and we were unable to recover it. 00:30:59.947 [2024-12-05 13:35:22.447634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.947 [2024-12-05 13:35:22.447681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.947 [2024-12-05 13:35:22.447695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.947 [2024-12-05 13:35:22.447702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.947 [2024-12-05 13:35:22.447709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.947 [2024-12-05 13:35:22.447723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.947 qpair failed and we were unable to recover it. 
00:30:59.947 [2024-12-05 13:35:22.457714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.947 [2024-12-05 13:35:22.457770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.947 [2024-12-05 13:35:22.457784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.947 [2024-12-05 13:35:22.457792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.947 [2024-12-05 13:35:22.457798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.947 [2024-12-05 13:35:22.457812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.947 qpair failed and we were unable to recover it. 00:30:59.947 [2024-12-05 13:35:22.467771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.947 [2024-12-05 13:35:22.467873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.947 [2024-12-05 13:35:22.467887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.947 [2024-12-05 13:35:22.467895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.947 [2024-12-05 13:35:22.467902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.947 [2024-12-05 13:35:22.467924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.947 qpair failed and we were unable to recover it. 00:30:59.947 [2024-12-05 13:35:22.477779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.947 [2024-12-05 13:35:22.477832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.947 [2024-12-05 13:35:22.477846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.947 [2024-12-05 13:35:22.477853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.947 [2024-12-05 13:35:22.477860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.947 [2024-12-05 13:35:22.477881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.947 qpair failed and we were unable to recover it. 
00:30:59.947 [2024-12-05 13:35:22.487736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.947 [2024-12-05 13:35:22.487785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.947 [2024-12-05 13:35:22.487798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.947 [2024-12-05 13:35:22.487805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.947 [2024-12-05 13:35:22.487812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.947 [2024-12-05 13:35:22.487826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.947 qpair failed and we were unable to recover it. 00:30:59.947 [2024-12-05 13:35:22.497718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.948 [2024-12-05 13:35:22.497773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.948 [2024-12-05 13:35:22.497787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.948 [2024-12-05 13:35:22.497794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.948 [2024-12-05 13:35:22.497801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.948 [2024-12-05 13:35:22.497815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.948 qpair failed and we were unable to recover it. 00:30:59.948 [2024-12-05 13:35:22.507887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:59.948 [2024-12-05 13:35:22.507942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:59.948 [2024-12-05 13:35:22.507956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:59.948 [2024-12-05 13:35:22.507963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:59.948 [2024-12-05 13:35:22.507970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:30:59.948 [2024-12-05 13:35:22.507984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:59.948 qpair failed and we were unable to recover it. 
00:31:00.209 [2024-12-05 13:35:22.517891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.209 [2024-12-05 13:35:22.517948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.209 [2024-12-05 13:35:22.517961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.209 [2024-12-05 13:35:22.517969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.209 [2024-12-05 13:35:22.517976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:00.209 [2024-12-05 13:35:22.517990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.209 qpair failed and we were unable to recover it. 00:31:00.209 [2024-12-05 13:35:22.527881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.209 [2024-12-05 13:35:22.527969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.209 [2024-12-05 13:35:22.527982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.209 [2024-12-05 13:35:22.527991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.209 [2024-12-05 13:35:22.527997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:00.209 [2024-12-05 13:35:22.528012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.209 qpair failed and we were unable to recover it. 00:31:00.209 [2024-12-05 13:35:22.537952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.209 [2024-12-05 13:35:22.538010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.209 [2024-12-05 13:35:22.538023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.209 [2024-12-05 13:35:22.538030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.209 [2024-12-05 13:35:22.538037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:00.209 [2024-12-05 13:35:22.538051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.209 qpair failed and we were unable to recover it. 
00:31:00.209 [2024-12-05 13:35:22.547956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.210 [2024-12-05 13:35:22.548016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.210 [2024-12-05 13:35:22.548029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.210 [2024-12-05 13:35:22.548036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.210 [2024-12-05 13:35:22.548043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:00.210 [2024-12-05 13:35:22.548057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.210 qpair failed and we were unable to recover it. 00:31:00.210 [2024-12-05 13:35:22.558002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.210 [2024-12-05 13:35:22.558055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.210 [2024-12-05 13:35:22.558068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.210 [2024-12-05 13:35:22.558079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.210 [2024-12-05 13:35:22.558086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:00.210 [2024-12-05 13:35:22.558100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.210 qpair failed and we were unable to recover it. 00:31:00.210 [2024-12-05 13:35:22.567957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.210 [2024-12-05 13:35:22.568007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.210 [2024-12-05 13:35:22.568020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.210 [2024-12-05 13:35:22.568027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.210 [2024-12-05 13:35:22.568034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:00.210 [2024-12-05 13:35:22.568047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.210 qpair failed and we were unable to recover it. 
00:31:00.210 [2024-12-05 13:35:22.578064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.210 [2024-12-05 13:35:22.578119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.210 [2024-12-05 13:35:22.578132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.210 [2024-12-05 13:35:22.578140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.210 [2024-12-05 13:35:22.578146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:00.210 [2024-12-05 13:35:22.578160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.210 qpair failed and we were unable to recover it. 00:31:00.210 [2024-12-05 13:35:22.588135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.210 [2024-12-05 13:35:22.588220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.210 [2024-12-05 13:35:22.588233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.210 [2024-12-05 13:35:22.588241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.210 [2024-12-05 13:35:22.588248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:00.210 [2024-12-05 13:35:22.588262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.210 qpair failed and we were unable to recover it. 00:31:00.210 [2024-12-05 13:35:22.598124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.210 [2024-12-05 13:35:22.598206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.210 [2024-12-05 13:35:22.598219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.210 [2024-12-05 13:35:22.598226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.210 [2024-12-05 13:35:22.598233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:00.210 [2024-12-05 13:35:22.598250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.210 qpair failed and we were unable to recover it. 
[... the same seven-line CONNECT failure sequence repeats for ~60 further retry attempts, timestamps advancing from 13:35:22.608 through 13:35:23.229 (log clock 00:31:00.210 through 00:31:00.740), each attempt against traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 ending: qpair failed and we were unable to recover it. ...]
00:31:00.740 [2024-12-05 13:35:23.239835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.740 [2024-12-05 13:35:23.239908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.740 [2024-12-05 13:35:23.239922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.740 [2024-12-05 13:35:23.239929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.740 [2024-12-05 13:35:23.239936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:00.740 [2024-12-05 13:35:23.239950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.740 qpair failed and we were unable to recover it. 00:31:00.740 [2024-12-05 13:35:23.249858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.740 [2024-12-05 13:35:23.249971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.740 [2024-12-05 13:35:23.249984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.740 [2024-12-05 13:35:23.249992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.740 [2024-12-05 13:35:23.250002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:00.740 [2024-12-05 13:35:23.250017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.740 qpair failed and we were unable to recover it. 00:31:00.740 [2024-12-05 13:35:23.259916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.740 [2024-12-05 13:35:23.259986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.740 [2024-12-05 13:35:23.259999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.740 [2024-12-05 13:35:23.260007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.740 [2024-12-05 13:35:23.260013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:00.741 [2024-12-05 13:35:23.260027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.741 qpair failed and we were unable to recover it. 
00:31:00.741 [2024-12-05 13:35:23.269926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.741 [2024-12-05 13:35:23.270028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.741 [2024-12-05 13:35:23.270042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.741 [2024-12-05 13:35:23.270050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.741 [2024-12-05 13:35:23.270057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:00.741 [2024-12-05 13:35:23.270071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.741 qpair failed and we were unable to recover it. 00:31:00.741 [2024-12-05 13:35:23.279948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.741 [2024-12-05 13:35:23.280045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.741 [2024-12-05 13:35:23.280058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.741 [2024-12-05 13:35:23.280066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.741 [2024-12-05 13:35:23.280073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:00.741 [2024-12-05 13:35:23.280088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.741 qpair failed and we were unable to recover it. 00:31:00.741 [2024-12-05 13:35:23.289959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.741 [2024-12-05 13:35:23.290008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.741 [2024-12-05 13:35:23.290021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.741 [2024-12-05 13:35:23.290029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.741 [2024-12-05 13:35:23.290036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:00.741 [2024-12-05 13:35:23.290050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.741 qpair failed and we were unable to recover it. 
00:31:00.741 [2024-12-05 13:35:23.300041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:00.741 [2024-12-05 13:35:23.300127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:00.741 [2024-12-05 13:35:23.300140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:00.741 [2024-12-05 13:35:23.300148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:00.741 [2024-12-05 13:35:23.300155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:00.741 [2024-12-05 13:35:23.300169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:00.741 qpair failed and we were unable to recover it. 00:31:01.003 [2024-12-05 13:35:23.309935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.003 [2024-12-05 13:35:23.309991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.003 [2024-12-05 13:35:23.310004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.003 [2024-12-05 13:35:23.310012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.003 [2024-12-05 13:35:23.310018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.003 [2024-12-05 13:35:23.310032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.003 qpair failed and we were unable to recover it. 00:31:01.003 [2024-12-05 13:35:23.320040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.003 [2024-12-05 13:35:23.320093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.003 [2024-12-05 13:35:23.320106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.003 [2024-12-05 13:35:23.320114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.003 [2024-12-05 13:35:23.320120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.003 [2024-12-05 13:35:23.320134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.003 qpair failed and we were unable to recover it. 
00:31:01.003 [2024-12-05 13:35:23.329961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.003 [2024-12-05 13:35:23.330018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.003 [2024-12-05 13:35:23.330032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.003 [2024-12-05 13:35:23.330039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.003 [2024-12-05 13:35:23.330046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.003 [2024-12-05 13:35:23.330061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.003 qpair failed and we were unable to recover it. 00:31:01.003 [2024-12-05 13:35:23.340032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.003 [2024-12-05 13:35:23.340090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.003 [2024-12-05 13:35:23.340107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.003 [2024-12-05 13:35:23.340114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.003 [2024-12-05 13:35:23.340121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.003 [2024-12-05 13:35:23.340135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.003 qpair failed and we were unable to recover it. 00:31:01.003 [2024-12-05 13:35:23.350166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.003 [2024-12-05 13:35:23.350220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.003 [2024-12-05 13:35:23.350233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.003 [2024-12-05 13:35:23.350240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.003 [2024-12-05 13:35:23.350247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.003 [2024-12-05 13:35:23.350262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.003 qpair failed and we were unable to recover it. 
00:31:01.003 [2024-12-05 13:35:23.360132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.003 [2024-12-05 13:35:23.360214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.003 [2024-12-05 13:35:23.360227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.003 [2024-12-05 13:35:23.360235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.003 [2024-12-05 13:35:23.360242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.003 [2024-12-05 13:35:23.360256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.003 qpair failed and we were unable to recover it. 00:31:01.003 [2024-12-05 13:35:23.370184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.003 [2024-12-05 13:35:23.370227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.003 [2024-12-05 13:35:23.370241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.003 [2024-12-05 13:35:23.370248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.003 [2024-12-05 13:35:23.370254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.003 [2024-12-05 13:35:23.370269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.003 qpair failed and we were unable to recover it. 00:31:01.003 [2024-12-05 13:35:23.380253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.003 [2024-12-05 13:35:23.380314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.003 [2024-12-05 13:35:23.380327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.003 [2024-12-05 13:35:23.380335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.003 [2024-12-05 13:35:23.380345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.003 [2024-12-05 13:35:23.380359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.003 qpair failed and we were unable to recover it. 
00:31:01.003 [2024-12-05 13:35:23.390217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.003 [2024-12-05 13:35:23.390269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.003 [2024-12-05 13:35:23.390283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.003 [2024-12-05 13:35:23.390290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.003 [2024-12-05 13:35:23.390296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.003 [2024-12-05 13:35:23.390310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.003 qpair failed and we were unable to recover it. 00:31:01.003 [2024-12-05 13:35:23.400252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.003 [2024-12-05 13:35:23.400311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.003 [2024-12-05 13:35:23.400324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.003 [2024-12-05 13:35:23.400331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.004 [2024-12-05 13:35:23.400338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.004 [2024-12-05 13:35:23.400352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.004 qpair failed and we were unable to recover it. 00:31:01.004 [2024-12-05 13:35:23.410285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.004 [2024-12-05 13:35:23.410335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.004 [2024-12-05 13:35:23.410349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.004 [2024-12-05 13:35:23.410357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.004 [2024-12-05 13:35:23.410364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.004 [2024-12-05 13:35:23.410377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.004 qpair failed and we were unable to recover it. 
00:31:01.004 [2024-12-05 13:35:23.420342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.004 [2024-12-05 13:35:23.420413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.004 [2024-12-05 13:35:23.420427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.004 [2024-12-05 13:35:23.420435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.004 [2024-12-05 13:35:23.420441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.004 [2024-12-05 13:35:23.420455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.004 qpair failed and we were unable to recover it. 00:31:01.004 [2024-12-05 13:35:23.430225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.004 [2024-12-05 13:35:23.430281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.004 [2024-12-05 13:35:23.430294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.004 [2024-12-05 13:35:23.430302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.004 [2024-12-05 13:35:23.430308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.004 [2024-12-05 13:35:23.430322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.004 qpair failed and we were unable to recover it. 00:31:01.004 [2024-12-05 13:35:23.440340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.004 [2024-12-05 13:35:23.440388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.004 [2024-12-05 13:35:23.440401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.004 [2024-12-05 13:35:23.440408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.004 [2024-12-05 13:35:23.440415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.004 [2024-12-05 13:35:23.440429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.004 qpair failed and we were unable to recover it. 
00:31:01.004 [2024-12-05 13:35:23.450394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.004 [2024-12-05 13:35:23.450447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.004 [2024-12-05 13:35:23.450461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.004 [2024-12-05 13:35:23.450469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.004 [2024-12-05 13:35:23.450475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.004 [2024-12-05 13:35:23.450489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.004 qpair failed and we were unable to recover it. 00:31:01.004 [2024-12-05 13:35:23.460493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.004 [2024-12-05 13:35:23.460565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.004 [2024-12-05 13:35:23.460578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.004 [2024-12-05 13:35:23.460585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.004 [2024-12-05 13:35:23.460592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.004 [2024-12-05 13:35:23.460606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.004 qpair failed and we were unable to recover it. 00:31:01.004 [2024-12-05 13:35:23.470469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.004 [2024-12-05 13:35:23.470529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.004 [2024-12-05 13:35:23.470547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.004 [2024-12-05 13:35:23.470555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.004 [2024-12-05 13:35:23.470561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.004 [2024-12-05 13:35:23.470575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.004 qpair failed and we were unable to recover it. 
00:31:01.004 [2024-12-05 13:35:23.480464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.004 [2024-12-05 13:35:23.480525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.004 [2024-12-05 13:35:23.480539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.004 [2024-12-05 13:35:23.480546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.004 [2024-12-05 13:35:23.480553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.004 [2024-12-05 13:35:23.480566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.004 qpair failed and we were unable to recover it. 00:31:01.004 [2024-12-05 13:35:23.490518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.004 [2024-12-05 13:35:23.490612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.004 [2024-12-05 13:35:23.490625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.004 [2024-12-05 13:35:23.490633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.004 [2024-12-05 13:35:23.490639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.004 [2024-12-05 13:35:23.490653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.004 qpair failed and we were unable to recover it. 00:31:01.004 [2024-12-05 13:35:23.500584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.004 [2024-12-05 13:35:23.500637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.004 [2024-12-05 13:35:23.500651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.004 [2024-12-05 13:35:23.500658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.005 [2024-12-05 13:35:23.500665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.005 [2024-12-05 13:35:23.500678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.005 qpair failed and we were unable to recover it. 
00:31:01.005 [2024-12-05 13:35:23.510559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.005 [2024-12-05 13:35:23.510622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.005 [2024-12-05 13:35:23.510636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.005 [2024-12-05 13:35:23.510643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.005 [2024-12-05 13:35:23.510653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.005 [2024-12-05 13:35:23.510667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.005 qpair failed and we were unable to recover it. 00:31:01.005 [2024-12-05 13:35:23.520446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.005 [2024-12-05 13:35:23.520495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.005 [2024-12-05 13:35:23.520509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.005 [2024-12-05 13:35:23.520516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.005 [2024-12-05 13:35:23.520523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.005 [2024-12-05 13:35:23.520537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.005 qpair failed and we were unable to recover it. 00:31:01.005 [2024-12-05 13:35:23.530604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.005 [2024-12-05 13:35:23.530705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.005 [2024-12-05 13:35:23.530720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.005 [2024-12-05 13:35:23.530727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.005 [2024-12-05 13:35:23.530737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.005 [2024-12-05 13:35:23.530752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.005 qpair failed and we were unable to recover it. 
00:31:01.005 [2024-12-05 13:35:23.540671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.005 [2024-12-05 13:35:23.540725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.005 [2024-12-05 13:35:23.540739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.005 [2024-12-05 13:35:23.540746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.005 [2024-12-05 13:35:23.540753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.005 [2024-12-05 13:35:23.540767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.005 qpair failed and we were unable to recover it. 00:31:01.005 [2024-12-05 13:35:23.550677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.005 [2024-12-05 13:35:23.550733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.005 [2024-12-05 13:35:23.550746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.005 [2024-12-05 13:35:23.550754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.005 [2024-12-05 13:35:23.550760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.005 [2024-12-05 13:35:23.550774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.005 qpair failed and we were unable to recover it. 00:31:01.005 [2024-12-05 13:35:23.560705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.005 [2024-12-05 13:35:23.560753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.005 [2024-12-05 13:35:23.560767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.005 [2024-12-05 13:35:23.560774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.005 [2024-12-05 13:35:23.560781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.005 [2024-12-05 13:35:23.560795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.005 qpair failed and we were unable to recover it. 
00:31:01.267 [2024-12-05 13:35:23.570733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.267 [2024-12-05 13:35:23.570784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.267 [2024-12-05 13:35:23.570799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.267 [2024-12-05 13:35:23.570806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.267 [2024-12-05 13:35:23.570813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.267 [2024-12-05 13:35:23.570826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.267 qpair failed and we were unable to recover it. 00:31:01.267 [2024-12-05 13:35:23.580756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.267 [2024-12-05 13:35:23.580835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.267 [2024-12-05 13:35:23.580849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.267 [2024-12-05 13:35:23.580856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.267 [2024-12-05 13:35:23.580866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.267 [2024-12-05 13:35:23.580881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.267 qpair failed and we were unable to recover it. 00:31:01.267 [2024-12-05 13:35:23.590770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.267 [2024-12-05 13:35:23.590829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.267 [2024-12-05 13:35:23.590842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.267 [2024-12-05 13:35:23.590850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.267 [2024-12-05 13:35:23.590856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.267 [2024-12-05 13:35:23.590874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.267 qpair failed and we were unable to recover it. 
00:31:01.267 [2024-12-05 13:35:23.600793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.267 [2024-12-05 13:35:23.600840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.267 [2024-12-05 13:35:23.600865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.267 [2024-12-05 13:35:23.600873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.267 [2024-12-05 13:35:23.600880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.267 [2024-12-05 13:35:23.600894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.267 qpair failed and we were unable to recover it. 00:31:01.267 [2024-12-05 13:35:23.610823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.267 [2024-12-05 13:35:23.610873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.267 [2024-12-05 13:35:23.610886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.267 [2024-12-05 13:35:23.610894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.267 [2024-12-05 13:35:23.610900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.267 [2024-12-05 13:35:23.610914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.267 qpair failed and we were unable to recover it. 00:31:01.267 [2024-12-05 13:35:23.620843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.267 [2024-12-05 13:35:23.620932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.267 [2024-12-05 13:35:23.620946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.267 [2024-12-05 13:35:23.620953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.267 [2024-12-05 13:35:23.620960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.267 [2024-12-05 13:35:23.620974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.267 qpair failed and we were unable to recover it. 
00:31:01.267 [2024-12-05 13:35:23.630883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.267 [2024-12-05 13:35:23.630936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.267 [2024-12-05 13:35:23.630950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.267 [2024-12-05 13:35:23.630957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.268 [2024-12-05 13:35:23.630964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.268 [2024-12-05 13:35:23.630978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.268 qpair failed and we were unable to recover it. 00:31:01.268 [2024-12-05 13:35:23.640894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.268 [2024-12-05 13:35:23.640942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.268 [2024-12-05 13:35:23.640955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.268 [2024-12-05 13:35:23.640963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.268 [2024-12-05 13:35:23.640973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.268 [2024-12-05 13:35:23.640987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.268 qpair failed and we were unable to recover it. 00:31:01.268 [2024-12-05 13:35:23.650916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.268 [2024-12-05 13:35:23.650962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.268 [2024-12-05 13:35:23.650975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.268 [2024-12-05 13:35:23.650983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.268 [2024-12-05 13:35:23.650990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.268 [2024-12-05 13:35:23.651003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.268 qpair failed and we were unable to recover it. 
00:31:01.268 [2024-12-05 13:35:23.660926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.268 [2024-12-05 13:35:23.660977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.268 [2024-12-05 13:35:23.660992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.268 [2024-12-05 13:35:23.661000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.268 [2024-12-05 13:35:23.661007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.268 [2024-12-05 13:35:23.661022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.268 qpair failed and we were unable to recover it. 00:31:01.268 [2024-12-05 13:35:23.670993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.268 [2024-12-05 13:35:23.671055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.268 [2024-12-05 13:35:23.671069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.268 [2024-12-05 13:35:23.671076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.268 [2024-12-05 13:35:23.671083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.268 [2024-12-05 13:35:23.671097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.268 qpair failed and we were unable to recover it. 00:31:01.268 [2024-12-05 13:35:23.681020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.268 [2024-12-05 13:35:23.681116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.268 [2024-12-05 13:35:23.681130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.268 [2024-12-05 13:35:23.681138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.268 [2024-12-05 13:35:23.681144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.268 [2024-12-05 13:35:23.681159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.268 qpair failed and we were unable to recover it. 
00:31:01.268 [2024-12-05 13:35:23.690890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.268 [2024-12-05 13:35:23.690936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.268 [2024-12-05 13:35:23.690950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.268 [2024-12-05 13:35:23.690957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.268 [2024-12-05 13:35:23.690964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.268 [2024-12-05 13:35:23.690978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.268 qpair failed and we were unable to recover it. 00:31:01.268 [2024-12-05 13:35:23.701064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.268 [2024-12-05 13:35:23.701117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.268 [2024-12-05 13:35:23.701130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.268 [2024-12-05 13:35:23.701137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.268 [2024-12-05 13:35:23.701144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.268 [2024-12-05 13:35:23.701158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.268 qpair failed and we were unable to recover it. 00:31:01.268 [2024-12-05 13:35:23.711099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.268 [2024-12-05 13:35:23.711145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.268 [2024-12-05 13:35:23.711158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.268 [2024-12-05 13:35:23.711165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.268 [2024-12-05 13:35:23.711172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.268 [2024-12-05 13:35:23.711185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.268 qpair failed and we were unable to recover it. 
00:31:01.268 [2024-12-05 13:35:23.721063] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.268 [2024-12-05 13:35:23.721115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.268 [2024-12-05 13:35:23.721128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.268 [2024-12-05 13:35:23.721136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.268 [2024-12-05 13:35:23.721142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.268 [2024-12-05 13:35:23.721156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.268 qpair failed and we were unable to recover it. 00:31:01.268 [2024-12-05 13:35:23.731108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.268 [2024-12-05 13:35:23.731160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.268 [2024-12-05 13:35:23.731176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.268 [2024-12-05 13:35:23.731184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.268 [2024-12-05 13:35:23.731190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.269 [2024-12-05 13:35:23.731204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.269 qpair failed and we were unable to recover it. 00:31:01.269 [2024-12-05 13:35:23.741169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.269 [2024-12-05 13:35:23.741222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.269 [2024-12-05 13:35:23.741235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.269 [2024-12-05 13:35:23.741243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.269 [2024-12-05 13:35:23.741249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.269 [2024-12-05 13:35:23.741263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.269 qpair failed and we were unable to recover it. 
00:31:01.269 [2024-12-05 13:35:23.751187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.269 [2024-12-05 13:35:23.751240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.269 [2024-12-05 13:35:23.751254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.269 [2024-12-05 13:35:23.751261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.269 [2024-12-05 13:35:23.751267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.269 [2024-12-05 13:35:23.751281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.269 qpair failed and we were unable to recover it. 00:31:01.269 [2024-12-05 13:35:23.761206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.269 [2024-12-05 13:35:23.761299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.269 [2024-12-05 13:35:23.761313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.269 [2024-12-05 13:35:23.761320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.269 [2024-12-05 13:35:23.761327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.269 [2024-12-05 13:35:23.761341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.269 qpair failed and we were unable to recover it. 00:31:01.269 [2024-12-05 13:35:23.771209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.269 [2024-12-05 13:35:23.771250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.269 [2024-12-05 13:35:23.771263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.269 [2024-12-05 13:35:23.771271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.269 [2024-12-05 13:35:23.771280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.269 [2024-12-05 13:35:23.771295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.269 qpair failed and we were unable to recover it. 
00:31:01.269 [2024-12-05 13:35:23.781258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.269 [2024-12-05 13:35:23.781304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.269 [2024-12-05 13:35:23.781318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.269 [2024-12-05 13:35:23.781325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.269 [2024-12-05 13:35:23.781332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.269 [2024-12-05 13:35:23.781346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.269 qpair failed and we were unable to recover it.
00:31:01.269 [2024-12-05 13:35:23.791293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.269 [2024-12-05 13:35:23.791338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.269 [2024-12-05 13:35:23.791352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.269 [2024-12-05 13:35:23.791359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.269 [2024-12-05 13:35:23.791365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.269 [2024-12-05 13:35:23.791379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.269 qpair failed and we were unable to recover it.
00:31:01.269 [2024-12-05 13:35:23.801274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.269 [2024-12-05 13:35:23.801319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.269 [2024-12-05 13:35:23.801332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.269 [2024-12-05 13:35:23.801340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.269 [2024-12-05 13:35:23.801346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.269 [2024-12-05 13:35:23.801360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.269 qpair failed and we were unable to recover it.
00:31:01.269 [2024-12-05 13:35:23.811332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.269 [2024-12-05 13:35:23.811405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.269 [2024-12-05 13:35:23.811418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.269 [2024-12-05 13:35:23.811426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.269 [2024-12-05 13:35:23.811432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.269 [2024-12-05 13:35:23.811446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.269 qpair failed and we were unable to recover it.
00:31:01.269 [2024-12-05 13:35:23.821240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.269 [2024-12-05 13:35:23.821287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.269 [2024-12-05 13:35:23.821300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.269 [2024-12-05 13:35:23.821307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.269 [2024-12-05 13:35:23.821314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.269 [2024-12-05 13:35:23.821329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.269 qpair failed and we were unable to recover it.
00:31:01.269 [2024-12-05 13:35:23.831269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.269 [2024-12-05 13:35:23.831316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.269 [2024-12-05 13:35:23.831329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.269 [2024-12-05 13:35:23.831337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.269 [2024-12-05 13:35:23.831343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.269 [2024-12-05 13:35:23.831357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.269 qpair failed and we were unable to recover it.
00:31:01.531 [2024-12-05 13:35:23.841405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.531 [2024-12-05 13:35:23.841448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.531 [2024-12-05 13:35:23.841461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.531 [2024-12-05 13:35:23.841469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.531 [2024-12-05 13:35:23.841475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.531 [2024-12-05 13:35:23.841489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.531 qpair failed and we were unable to recover it.
00:31:01.531 [2024-12-05 13:35:23.851430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.531 [2024-12-05 13:35:23.851477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.531 [2024-12-05 13:35:23.851490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.531 [2024-12-05 13:35:23.851498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.531 [2024-12-05 13:35:23.851504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.531 [2024-12-05 13:35:23.851518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.531 qpair failed and we were unable to recover it.
00:31:01.531 [2024-12-05 13:35:23.861482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.531 [2024-12-05 13:35:23.861531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.531 [2024-12-05 13:35:23.861548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.531 [2024-12-05 13:35:23.861555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.531 [2024-12-05 13:35:23.861562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.531 [2024-12-05 13:35:23.861576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.531 qpair failed and we were unable to recover it.
00:31:01.531 [2024-12-05 13:35:23.871504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.531 [2024-12-05 13:35:23.871598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.531 [2024-12-05 13:35:23.871611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.531 [2024-12-05 13:35:23.871619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.531 [2024-12-05 13:35:23.871626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.531 [2024-12-05 13:35:23.871639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.531 qpair failed and we were unable to recover it.
00:31:01.531 [2024-12-05 13:35:23.881500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.531 [2024-12-05 13:35:23.881546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.531 [2024-12-05 13:35:23.881559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.531 [2024-12-05 13:35:23.881566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.532 [2024-12-05 13:35:23.881573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.532 [2024-12-05 13:35:23.881586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.532 qpair failed and we were unable to recover it.
00:31:01.532 [2024-12-05 13:35:23.891627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.532 [2024-12-05 13:35:23.891670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.532 [2024-12-05 13:35:23.891684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.532 [2024-12-05 13:35:23.891691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.532 [2024-12-05 13:35:23.891698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.532 [2024-12-05 13:35:23.891711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.532 qpair failed and we were unable to recover it.
00:31:01.532 [2024-12-05 13:35:23.901613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.532 [2024-12-05 13:35:23.901664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.532 [2024-12-05 13:35:23.901689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.532 [2024-12-05 13:35:23.901698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.532 [2024-12-05 13:35:23.901710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.532 [2024-12-05 13:35:23.901730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.532 qpair failed and we were unable to recover it.
00:31:01.532 [2024-12-05 13:35:23.911653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.532 [2024-12-05 13:35:23.911736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.532 [2024-12-05 13:35:23.911751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.532 [2024-12-05 13:35:23.911758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.532 [2024-12-05 13:35:23.911765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.532 [2024-12-05 13:35:23.911780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.532 qpair failed and we were unable to recover it.
00:31:01.532 [2024-12-05 13:35:23.921645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.532 [2024-12-05 13:35:23.921738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.532 [2024-12-05 13:35:23.921753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.532 [2024-12-05 13:35:23.921761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.532 [2024-12-05 13:35:23.921769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.532 [2024-12-05 13:35:23.921784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.532 qpair failed and we were unable to recover it.
00:31:01.532 [2024-12-05 13:35:23.931679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.532 [2024-12-05 13:35:23.931743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.532 [2024-12-05 13:35:23.931757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.532 [2024-12-05 13:35:23.931764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.532 [2024-12-05 13:35:23.931771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.532 [2024-12-05 13:35:23.931785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.532 qpair failed and we were unable to recover it.
00:31:01.532 [2024-12-05 13:35:23.941741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.532 [2024-12-05 13:35:23.941789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.532 [2024-12-05 13:35:23.941802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.532 [2024-12-05 13:35:23.941809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.532 [2024-12-05 13:35:23.941816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.532 [2024-12-05 13:35:23.941830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.532 qpair failed and we were unable to recover it.
00:31:01.532 [2024-12-05 13:35:23.951694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.532 [2024-12-05 13:35:23.951746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.532 [2024-12-05 13:35:23.951759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.532 [2024-12-05 13:35:23.951767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.532 [2024-12-05 13:35:23.951773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.532 [2024-12-05 13:35:23.951787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.532 qpair failed and we were unable to recover it.
00:31:01.532 [2024-12-05 13:35:23.961748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.532 [2024-12-05 13:35:23.961794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.532 [2024-12-05 13:35:23.961807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.532 [2024-12-05 13:35:23.961815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.532 [2024-12-05 13:35:23.961821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.532 [2024-12-05 13:35:23.961835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.532 qpair failed and we were unable to recover it.
00:31:01.532 [2024-12-05 13:35:23.971650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.532 [2024-12-05 13:35:23.971695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.532 [2024-12-05 13:35:23.971708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.532 [2024-12-05 13:35:23.971716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.532 [2024-12-05 13:35:23.971722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.532 [2024-12-05 13:35:23.971736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.532 qpair failed and we were unable to recover it.
00:31:01.532 [2024-12-05 13:35:23.981794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.532 [2024-12-05 13:35:23.981849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.532 [2024-12-05 13:35:23.981866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.533 [2024-12-05 13:35:23.981874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.533 [2024-12-05 13:35:23.981880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.533 [2024-12-05 13:35:23.981895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.533 qpair failed and we were unable to recover it.
00:31:01.533 [2024-12-05 13:35:23.991790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.533 [2024-12-05 13:35:23.991843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.533 [2024-12-05 13:35:23.991859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.533 [2024-12-05 13:35:23.991869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.533 [2024-12-05 13:35:23.991876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.533 [2024-12-05 13:35:23.991890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.533 qpair failed and we were unable to recover it.
00:31:01.533 [2024-12-05 13:35:24.001842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.533 [2024-12-05 13:35:24.001944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.533 [2024-12-05 13:35:24.001958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.533 [2024-12-05 13:35:24.001965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.533 [2024-12-05 13:35:24.001971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.533 [2024-12-05 13:35:24.001986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.533 qpair failed and we were unable to recover it.
00:31:01.533 [2024-12-05 13:35:24.011857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.533 [2024-12-05 13:35:24.011905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.533 [2024-12-05 13:35:24.011919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.533 [2024-12-05 13:35:24.011926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.533 [2024-12-05 13:35:24.011933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.533 [2024-12-05 13:35:24.011946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.533 qpair failed and we were unable to recover it.
00:31:01.533 [2024-12-05 13:35:24.021905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.533 [2024-12-05 13:35:24.021956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.533 [2024-12-05 13:35:24.021971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.533 [2024-12-05 13:35:24.021978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.533 [2024-12-05 13:35:24.021985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.533 [2024-12-05 13:35:24.021999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.533 qpair failed and we were unable to recover it.
00:31:01.533 [2024-12-05 13:35:24.031946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.533 [2024-12-05 13:35:24.032012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.533 [2024-12-05 13:35:24.032026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.533 [2024-12-05 13:35:24.032034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.533 [2024-12-05 13:35:24.032043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.533 [2024-12-05 13:35:24.032057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.533 qpair failed and we were unable to recover it.
00:31:01.533 [2024-12-05 13:35:24.041957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.533 [2024-12-05 13:35:24.042015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.533 [2024-12-05 13:35:24.042028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.533 [2024-12-05 13:35:24.042036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.533 [2024-12-05 13:35:24.042042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.533 [2024-12-05 13:35:24.042056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.533 qpair failed and we were unable to recover it.
00:31:01.533 [2024-12-05 13:35:24.051967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.533 [2024-12-05 13:35:24.052012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.533 [2024-12-05 13:35:24.052025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.533 [2024-12-05 13:35:24.052032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.533 [2024-12-05 13:35:24.052039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.533 [2024-12-05 13:35:24.052053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.533 qpair failed and we were unable to recover it.
00:31:01.533 [2024-12-05 13:35:24.062006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.533 [2024-12-05 13:35:24.062095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.533 [2024-12-05 13:35:24.062109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.533 [2024-12-05 13:35:24.062118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.533 [2024-12-05 13:35:24.062124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.533 [2024-12-05 13:35:24.062138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.533 qpair failed and we were unable to recover it.
00:31:01.533 [2024-12-05 13:35:24.072001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.533 [2024-12-05 13:35:24.072049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.533 [2024-12-05 13:35:24.072062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.533 [2024-12-05 13:35:24.072069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.533 [2024-12-05 13:35:24.072076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.533 [2024-12-05 13:35:24.072090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.533 qpair failed and we were unable to recover it.
00:31:01.533 [2024-12-05 13:35:24.081972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.533 [2024-12-05 13:35:24.082024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.533 [2024-12-05 13:35:24.082037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.534 [2024-12-05 13:35:24.082045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.534 [2024-12-05 13:35:24.082051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.534 [2024-12-05 13:35:24.082065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.534 qpair failed and we were unable to recover it.
00:31:01.534 [2024-12-05 13:35:24.092108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.534 [2024-12-05 13:35:24.092161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.534 [2024-12-05 13:35:24.092175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.534 [2024-12-05 13:35:24.092182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.534 [2024-12-05 13:35:24.092189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.534 [2024-12-05 13:35:24.092202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.534 qpair failed and we were unable to recover it.
00:31:01.796 [2024-12-05 13:35:24.101997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.796 [2024-12-05 13:35:24.102046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.796 [2024-12-05 13:35:24.102060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.796 [2024-12-05 13:35:24.102067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.796 [2024-12-05 13:35:24.102074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.796 [2024-12-05 13:35:24.102088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.796 qpair failed and we were unable to recover it.
00:31:01.796 [2024-12-05 13:35:24.112172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.796 [2024-12-05 13:35:24.112219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.796 [2024-12-05 13:35:24.112232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.796 [2024-12-05 13:35:24.112239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.796 [2024-12-05 13:35:24.112246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.796 [2024-12-05 13:35:24.112260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.797 qpair failed and we were unable to recover it.
00:31:01.797 [2024-12-05 13:35:24.122037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.797 [2024-12-05 13:35:24.122084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.797 [2024-12-05 13:35:24.122101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.797 [2024-12-05 13:35:24.122109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.797 [2024-12-05 13:35:24.122115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.797 [2024-12-05 13:35:24.122130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.797 qpair failed and we were unable to recover it.
00:31:01.797 [2024-12-05 13:35:24.132195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.797 [2024-12-05 13:35:24.132243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.797 [2024-12-05 13:35:24.132257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.797 [2024-12-05 13:35:24.132264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.797 [2024-12-05 13:35:24.132271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.797 [2024-12-05 13:35:24.132285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.797 qpair failed and we were unable to recover it.
00:31:01.797 [2024-12-05 13:35:24.142243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.797 [2024-12-05 13:35:24.142341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.797 [2024-12-05 13:35:24.142355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.797 [2024-12-05 13:35:24.142362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.797 [2024-12-05 13:35:24.142369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.797 [2024-12-05 13:35:24.142383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.797 qpair failed and we were unable to recover it.
00:31:01.797 [2024-12-05 13:35:24.152255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.797 [2024-12-05 13:35:24.152308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.797 [2024-12-05 13:35:24.152321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.797 [2024-12-05 13:35:24.152329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.797 [2024-12-05 13:35:24.152335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.797 [2024-12-05 13:35:24.152349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.797 qpair failed and we were unable to recover it.
00:31:01.797 [2024-12-05 13:35:24.162279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.797 [2024-12-05 13:35:24.162325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.797 [2024-12-05 13:35:24.162338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.797 [2024-12-05 13:35:24.162346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.797 [2024-12-05 13:35:24.162355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.797 [2024-12-05 13:35:24.162369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.797 qpair failed and we were unable to recover it.
00:31:01.797 [2024-12-05 13:35:24.172309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.797 [2024-12-05 13:35:24.172402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.797 [2024-12-05 13:35:24.172416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.797 [2024-12-05 13:35:24.172424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.797 [2024-12-05 13:35:24.172432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.797 [2024-12-05 13:35:24.172446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.797 qpair failed and we were unable to recover it.
00:31:01.797 [2024-12-05 13:35:24.182201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.797 [2024-12-05 13:35:24.182251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.797 [2024-12-05 13:35:24.182266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.797 [2024-12-05 13:35:24.182274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.797 [2024-12-05 13:35:24.182281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.797 [2024-12-05 13:35:24.182294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.797 qpair failed and we were unable to recover it.
00:31:01.797 [2024-12-05 13:35:24.192236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.797 [2024-12-05 13:35:24.192282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.797 [2024-12-05 13:35:24.192296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.797 [2024-12-05 13:35:24.192303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.797 [2024-12-05 13:35:24.192310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.797 [2024-12-05 13:35:24.192324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.797 qpair failed and we were unable to recover it.
00:31:01.797 [2024-12-05 13:35:24.202369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.797 [2024-12-05 13:35:24.202421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.797 [2024-12-05 13:35:24.202435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.797 [2024-12-05 13:35:24.202442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.797 [2024-12-05 13:35:24.202449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.797 [2024-12-05 13:35:24.202463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.797 qpair failed and we were unable to recover it.
00:31:01.797 [2024-12-05 13:35:24.212419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.797 [2024-12-05 13:35:24.212467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.797 [2024-12-05 13:35:24.212480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.798 [2024-12-05 13:35:24.212488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.798 [2024-12-05 13:35:24.212495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.798 [2024-12-05 13:35:24.212509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.798 qpair failed and we were unable to recover it.
00:31:01.798 [2024-12-05 13:35:24.222412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.798 [2024-12-05 13:35:24.222458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.798 [2024-12-05 13:35:24.222472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.798 [2024-12-05 13:35:24.222479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.798 [2024-12-05 13:35:24.222486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.798 [2024-12-05 13:35:24.222499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.798 qpair failed and we were unable to recover it.
00:31:01.798 [2024-12-05 13:35:24.232339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.798 [2024-12-05 13:35:24.232388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.798 [2024-12-05 13:35:24.232404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.798 [2024-12-05 13:35:24.232412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.798 [2024-12-05 13:35:24.232418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.798 [2024-12-05 13:35:24.232434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.798 qpair failed and we were unable to recover it.
00:31:01.798 [2024-12-05 13:35:24.242543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.798 [2024-12-05 13:35:24.242611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.798 [2024-12-05 13:35:24.242626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.798 [2024-12-05 13:35:24.242633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.798 [2024-12-05 13:35:24.242643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.798 [2024-12-05 13:35:24.242657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.798 qpair failed and we were unable to recover it.
00:31:01.798 [2024-12-05 13:35:24.252508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.798 [2024-12-05 13:35:24.252605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.798 [2024-12-05 13:35:24.252623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.798 [2024-12-05 13:35:24.252631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.798 [2024-12-05 13:35:24.252638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.798 [2024-12-05 13:35:24.252652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.798 qpair failed and we were unable to recover it.
00:31:01.798 [2024-12-05 13:35:24.262531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.798 [2024-12-05 13:35:24.262605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.798 [2024-12-05 13:35:24.262619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.798 [2024-12-05 13:35:24.262626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.798 [2024-12-05 13:35:24.262633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.798 [2024-12-05 13:35:24.262647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.798 qpair failed and we were unable to recover it.
00:31:01.798 [2024-12-05 13:35:24.272436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.798 [2024-12-05 13:35:24.272528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.798 [2024-12-05 13:35:24.272542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.798 [2024-12-05 13:35:24.272550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.798 [2024-12-05 13:35:24.272556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.798 [2024-12-05 13:35:24.272570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.798 qpair failed and we were unable to recover it.
00:31:01.798 [2024-12-05 13:35:24.282595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.798 [2024-12-05 13:35:24.282639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.798 [2024-12-05 13:35:24.282653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.798 [2024-12-05 13:35:24.282660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.798 [2024-12-05 13:35:24.282667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.798 [2024-12-05 13:35:24.282681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.798 qpair failed and we were unable to recover it.
00:31:01.798 [2024-12-05 13:35:24.292610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.798 [2024-12-05 13:35:24.292658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.798 [2024-12-05 13:35:24.292671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.798 [2024-12-05 13:35:24.292679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.798 [2024-12-05 13:35:24.292689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.798 [2024-12-05 13:35:24.292704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.798 qpair failed and we were unable to recover it.
00:31:01.798 [2024-12-05 13:35:24.302603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.798 [2024-12-05 13:35:24.302674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.798 [2024-12-05 13:35:24.302688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.798 [2024-12-05 13:35:24.302695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.798 [2024-12-05 13:35:24.302701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.798 [2024-12-05 13:35:24.302716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.798 qpair failed and we were unable to recover it.
00:31:01.798 [2024-12-05 13:35:24.312703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.798 [2024-12-05 13:35:24.312756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.799 [2024-12-05 13:35:24.312769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.799 [2024-12-05 13:35:24.312777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.799 [2024-12-05 13:35:24.312783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.799 [2024-12-05 13:35:24.312797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.799 qpair failed and we were unable to recover it.
00:31:01.799 [2024-12-05 13:35:24.322680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.799 [2024-12-05 13:35:24.322733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.799 [2024-12-05 13:35:24.322746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.799 [2024-12-05 13:35:24.322753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.799 [2024-12-05 13:35:24.322760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.799 [2024-12-05 13:35:24.322773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.799 qpair failed and we were unable to recover it.
00:31:01.799 [2024-12-05 13:35:24.332722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.799 [2024-12-05 13:35:24.332765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.799 [2024-12-05 13:35:24.332777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.799 [2024-12-05 13:35:24.332785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.799 [2024-12-05 13:35:24.332791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.799 [2024-12-05 13:35:24.332805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.799 qpair failed and we were unable to recover it.
00:31:01.799 [2024-12-05 13:35:24.342739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:31:01.799 [2024-12-05 13:35:24.342784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:31:01.799 [2024-12-05 13:35:24.342798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:31:01.799 [2024-12-05 13:35:24.342805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:31:01.799 [2024-12-05 13:35:24.342811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490
00:31:01.799 [2024-12-05 13:35:24.342825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:31:01.799 qpair failed and we were unable to recover it.
00:31:01.799 [2024-12-05 13:35:24.352755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:01.799 [2024-12-05 13:35:24.352802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:01.799 [2024-12-05 13:35:24.352815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:01.799 [2024-12-05 13:35:24.352823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:01.799 [2024-12-05 13:35:24.352829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:01.799 [2024-12-05 13:35:24.352843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:01.799 qpair failed and we were unable to recover it. 00:31:02.063 [2024-12-05 13:35:24.362792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.063 [2024-12-05 13:35:24.362848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.063 [2024-12-05 13:35:24.362865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.063 [2024-12-05 13:35:24.362873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.063 [2024-12-05 13:35:24.362879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.063 [2024-12-05 13:35:24.362893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.063 qpair failed and we were unable to recover it. 00:31:02.063 [2024-12-05 13:35:24.372828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.063 [2024-12-05 13:35:24.372877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.063 [2024-12-05 13:35:24.372891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.063 [2024-12-05 13:35:24.372898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.063 [2024-12-05 13:35:24.372905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.063 [2024-12-05 13:35:24.372919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.063 qpair failed and we were unable to recover it. 
00:31:02.063 [2024-12-05 13:35:24.382855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.063 [2024-12-05 13:35:24.382908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.063 [2024-12-05 13:35:24.382925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.063 [2024-12-05 13:35:24.382933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.063 [2024-12-05 13:35:24.382939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.063 [2024-12-05 13:35:24.382954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.063 qpair failed and we were unable to recover it. 00:31:02.063 [2024-12-05 13:35:24.392767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.063 [2024-12-05 13:35:24.392817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.063 [2024-12-05 13:35:24.392830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.063 [2024-12-05 13:35:24.392838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.063 [2024-12-05 13:35:24.392845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.063 [2024-12-05 13:35:24.392858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.063 qpair failed and we were unable to recover it. 00:31:02.063 [2024-12-05 13:35:24.402941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.063 [2024-12-05 13:35:24.403014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.063 [2024-12-05 13:35:24.403028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.063 [2024-12-05 13:35:24.403035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.063 [2024-12-05 13:35:24.403043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.063 [2024-12-05 13:35:24.403057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.063 qpair failed and we were unable to recover it. 
00:31:02.063 [2024-12-05 13:35:24.412942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.063 [2024-12-05 13:35:24.413024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.063 [2024-12-05 13:35:24.413037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.063 [2024-12-05 13:35:24.413045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.063 [2024-12-05 13:35:24.413052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.063 [2024-12-05 13:35:24.413066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.063 qpair failed and we were unable to recover it. 00:31:02.063 [2024-12-05 13:35:24.423009] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.063 [2024-12-05 13:35:24.423095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.063 [2024-12-05 13:35:24.423108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.063 [2024-12-05 13:35:24.423116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.063 [2024-12-05 13:35:24.423126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.063 [2024-12-05 13:35:24.423140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.063 qpair failed and we were unable to recover it. 00:31:02.063 [2024-12-05 13:35:24.433054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.063 [2024-12-05 13:35:24.433125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.063 [2024-12-05 13:35:24.433138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.063 [2024-12-05 13:35:24.433146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.063 [2024-12-05 13:35:24.433152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.063 [2024-12-05 13:35:24.433167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.063 qpair failed and we were unable to recover it. 
00:31:02.063 [2024-12-05 13:35:24.442888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.063 [2024-12-05 13:35:24.442938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.063 [2024-12-05 13:35:24.442952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.063 [2024-12-05 13:35:24.442959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.063 [2024-12-05 13:35:24.442966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.063 [2024-12-05 13:35:24.442979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.063 qpair failed and we were unable to recover it. 00:31:02.063 [2024-12-05 13:35:24.453082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.064 [2024-12-05 13:35:24.453170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.064 [2024-12-05 13:35:24.453184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.064 [2024-12-05 13:35:24.453192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.064 [2024-12-05 13:35:24.453199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.064 [2024-12-05 13:35:24.453213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.064 qpair failed and we were unable to recover it. 00:31:02.064 [2024-12-05 13:35:24.462954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.064 [2024-12-05 13:35:24.463002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.064 [2024-12-05 13:35:24.463016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.064 [2024-12-05 13:35:24.463023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.064 [2024-12-05 13:35:24.463030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.064 [2024-12-05 13:35:24.463044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.064 qpair failed and we were unable to recover it. 
00:31:02.064 [2024-12-05 13:35:24.473238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.064 [2024-12-05 13:35:24.473312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.064 [2024-12-05 13:35:24.473326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.064 [2024-12-05 13:35:24.473333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.064 [2024-12-05 13:35:24.473341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.064 [2024-12-05 13:35:24.473355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.064 qpair failed and we were unable to recover it. 00:31:02.064 [2024-12-05 13:35:24.483015] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.064 [2024-12-05 13:35:24.483064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.064 [2024-12-05 13:35:24.483078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.064 [2024-12-05 13:35:24.483086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.064 [2024-12-05 13:35:24.483092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.064 [2024-12-05 13:35:24.483106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.064 qpair failed and we were unable to recover it. 00:31:02.064 [2024-12-05 13:35:24.493157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.064 [2024-12-05 13:35:24.493201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.064 [2024-12-05 13:35:24.493214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.064 [2024-12-05 13:35:24.493221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.064 [2024-12-05 13:35:24.493228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.064 [2024-12-05 13:35:24.493242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.064 qpair failed and we were unable to recover it. 
00:31:02.064 [2024-12-05 13:35:24.503048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.064 [2024-12-05 13:35:24.503099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.064 [2024-12-05 13:35:24.503112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.064 [2024-12-05 13:35:24.503120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.064 [2024-12-05 13:35:24.503127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.064 [2024-12-05 13:35:24.503140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.064 qpair failed and we were unable to recover it. 00:31:02.064 [2024-12-05 13:35:24.513222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.064 [2024-12-05 13:35:24.513270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.064 [2024-12-05 13:35:24.513286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.064 [2024-12-05 13:35:24.513294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.064 [2024-12-05 13:35:24.513300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.064 [2024-12-05 13:35:24.513313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.064 qpair failed and we were unable to recover it. 00:31:02.064 [2024-12-05 13:35:24.523197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.064 [2024-12-05 13:35:24.523241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.064 [2024-12-05 13:35:24.523255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.064 [2024-12-05 13:35:24.523262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.064 [2024-12-05 13:35:24.523268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.064 [2024-12-05 13:35:24.523281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.064 qpair failed and we were unable to recover it. 
00:31:02.064 [2024-12-05 13:35:24.533260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.064 [2024-12-05 13:35:24.533305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.064 [2024-12-05 13:35:24.533318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.064 [2024-12-05 13:35:24.533326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.064 [2024-12-05 13:35:24.533332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.064 [2024-12-05 13:35:24.533346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.064 qpair failed and we were unable to recover it. 00:31:02.064 [2024-12-05 13:35:24.543294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.064 [2024-12-05 13:35:24.543349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.064 [2024-12-05 13:35:24.543363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.064 [2024-12-05 13:35:24.543370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.064 [2024-12-05 13:35:24.543377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.064 [2024-12-05 13:35:24.543390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.064 qpair failed and we were unable to recover it. 00:31:02.064 [2024-12-05 13:35:24.553359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.065 [2024-12-05 13:35:24.553408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.065 [2024-12-05 13:35:24.553421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.065 [2024-12-05 13:35:24.553428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.065 [2024-12-05 13:35:24.553438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.065 [2024-12-05 13:35:24.553452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.065 qpair failed and we were unable to recover it. 
00:31:02.065 [2024-12-05 13:35:24.563293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.065 [2024-12-05 13:35:24.563339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.065 [2024-12-05 13:35:24.563354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.065 [2024-12-05 13:35:24.563362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.065 [2024-12-05 13:35:24.563370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.065 [2024-12-05 13:35:24.563387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.065 qpair failed and we were unable to recover it. 00:31:02.065 [2024-12-05 13:35:24.573366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.065 [2024-12-05 13:35:24.573415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.065 [2024-12-05 13:35:24.573429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.065 [2024-12-05 13:35:24.573437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.065 [2024-12-05 13:35:24.573443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.065 [2024-12-05 13:35:24.573457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.065 qpair failed and we were unable to recover it. 00:31:02.065 [2024-12-05 13:35:24.583406] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.065 [2024-12-05 13:35:24.583452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.065 [2024-12-05 13:35:24.583466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.065 [2024-12-05 13:35:24.583474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.065 [2024-12-05 13:35:24.583481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.065 [2024-12-05 13:35:24.583495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.065 qpair failed and we were unable to recover it. 
00:31:02.065 [2024-12-05 13:35:24.593432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.065 [2024-12-05 13:35:24.593479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.065 [2024-12-05 13:35:24.593492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.065 [2024-12-05 13:35:24.593500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.065 [2024-12-05 13:35:24.593506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.065 [2024-12-05 13:35:24.593520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.065 qpair failed and we were unable to recover it. 00:31:02.065 [2024-12-05 13:35:24.603325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.065 [2024-12-05 13:35:24.603373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.065 [2024-12-05 13:35:24.603387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.065 [2024-12-05 13:35:24.603394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.065 [2024-12-05 13:35:24.603400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.065 [2024-12-05 13:35:24.603414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.065 qpair failed and we were unable to recover it. 00:31:02.065 [2024-12-05 13:35:24.613473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.065 [2024-12-05 13:35:24.613524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.065 [2024-12-05 13:35:24.613538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.065 [2024-12-05 13:35:24.613545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.065 [2024-12-05 13:35:24.613552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.065 [2024-12-05 13:35:24.613566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.065 qpair failed and we were unable to recover it. 
00:31:02.065 [2024-12-05 13:35:24.623464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.065 [2024-12-05 13:35:24.623509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.065 [2024-12-05 13:35:24.623523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.065 [2024-12-05 13:35:24.623531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.065 [2024-12-05 13:35:24.623537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.065 [2024-12-05 13:35:24.623551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.065 qpair failed and we were unable to recover it. 00:31:02.328 [2024-12-05 13:35:24.633533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.328 [2024-12-05 13:35:24.633595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.328 [2024-12-05 13:35:24.633608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.328 [2024-12-05 13:35:24.633616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.328 [2024-12-05 13:35:24.633622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.328 [2024-12-05 13:35:24.633636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.328 qpair failed and we were unable to recover it. 00:31:02.328 [2024-12-05 13:35:24.643559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.328 [2024-12-05 13:35:24.643612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.328 [2024-12-05 13:35:24.643641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.328 [2024-12-05 13:35:24.643650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.328 [2024-12-05 13:35:24.643658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.328 [2024-12-05 13:35:24.643678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.328 qpair failed and we were unable to recover it. 
00:31:02.328 [2024-12-05 13:35:24.653569] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.329 [2024-12-05 13:35:24.653626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.329 [2024-12-05 13:35:24.653652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.329 [2024-12-05 13:35:24.653661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.329 [2024-12-05 13:35:24.653668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.329 [2024-12-05 13:35:24.653689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.329 qpair failed and we were unable to recover it. 00:31:02.329 [2024-12-05 13:35:24.663603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.329 [2024-12-05 13:35:24.663707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.329 [2024-12-05 13:35:24.663734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.329 [2024-12-05 13:35:24.663743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.329 [2024-12-05 13:35:24.663751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.329 [2024-12-05 13:35:24.663770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.329 qpair failed and we were unable to recover it. 00:31:02.329 [2024-12-05 13:35:24.673644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.329 [2024-12-05 13:35:24.673698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.329 [2024-12-05 13:35:24.673714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.329 [2024-12-05 13:35:24.673721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.329 [2024-12-05 13:35:24.673728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.329 [2024-12-05 13:35:24.673744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.329 qpair failed and we were unable to recover it. 
00:31:02.329 [2024-12-05 13:35:24.683542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.329 [2024-12-05 13:35:24.683587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.329 [2024-12-05 13:35:24.683603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.329 [2024-12-05 13:35:24.683610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.329 [2024-12-05 13:35:24.683621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.329 [2024-12-05 13:35:24.683637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.329 qpair failed and we were unable to recover it. 00:31:02.329 [2024-12-05 13:35:24.693692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.329 [2024-12-05 13:35:24.693750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.329 [2024-12-05 13:35:24.693764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.329 [2024-12-05 13:35:24.693772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.329 [2024-12-05 13:35:24.693779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.329 [2024-12-05 13:35:24.693793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.329 qpair failed and we were unable to recover it. 00:31:02.329 [2024-12-05 13:35:24.703605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.329 [2024-12-05 13:35:24.703657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.329 [2024-12-05 13:35:24.703671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.329 [2024-12-05 13:35:24.703679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.329 [2024-12-05 13:35:24.703685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.329 [2024-12-05 13:35:24.703700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.329 qpair failed and we were unable to recover it. 
00:31:02.329 [2024-12-05 13:35:24.713744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.329 [2024-12-05 13:35:24.713792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.329 [2024-12-05 13:35:24.713805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.329 [2024-12-05 13:35:24.713813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.329 [2024-12-05 13:35:24.713820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.329 [2024-12-05 13:35:24.713834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.329 qpair failed and we were unable to recover it. 00:31:02.329 [2024-12-05 13:35:24.723775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.329 [2024-12-05 13:35:24.723827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.329 [2024-12-05 13:35:24.723841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.329 [2024-12-05 13:35:24.723848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.329 [2024-12-05 13:35:24.723854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.329 [2024-12-05 13:35:24.723872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.329 qpair failed and we were unable to recover it. 00:31:02.329 [2024-12-05 13:35:24.733789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.329 [2024-12-05 13:35:24.733838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.329 [2024-12-05 13:35:24.733852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.329 [2024-12-05 13:35:24.733859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.329 [2024-12-05 13:35:24.733870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.329 [2024-12-05 13:35:24.733884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.329 qpair failed and we were unable to recover it. 
00:31:02.329 [2024-12-05 13:35:24.743834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.329 [2024-12-05 13:35:24.743881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.329 [2024-12-05 13:35:24.743896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.329 [2024-12-05 13:35:24.743903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.329 [2024-12-05 13:35:24.743910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.329 [2024-12-05 13:35:24.743924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.329 qpair failed and we were unable to recover it. 00:31:02.329 [2024-12-05 13:35:24.753881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.330 [2024-12-05 13:35:24.753927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.330 [2024-12-05 13:35:24.753941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.330 [2024-12-05 13:35:24.753949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.330 [2024-12-05 13:35:24.753956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.330 [2024-12-05 13:35:24.753970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.330 qpair failed and we were unable to recover it. 00:31:02.330 [2024-12-05 13:35:24.763843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.330 [2024-12-05 13:35:24.763897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.330 [2024-12-05 13:35:24.763910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.330 [2024-12-05 13:35:24.763918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.330 [2024-12-05 13:35:24.763924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.330 [2024-12-05 13:35:24.763939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.330 qpair failed and we were unable to recover it. 
00:31:02.330 [2024-12-05 13:35:24.773905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.330 [2024-12-05 13:35:24.773951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.330 [2024-12-05 13:35:24.773968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.330 [2024-12-05 13:35:24.773976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.330 [2024-12-05 13:35:24.773982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.330 [2024-12-05 13:35:24.773997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.330 qpair failed and we were unable to recover it. 00:31:02.330 [2024-12-05 13:35:24.783935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.330 [2024-12-05 13:35:24.783982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.330 [2024-12-05 13:35:24.783995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.330 [2024-12-05 13:35:24.784003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.330 [2024-12-05 13:35:24.784010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.330 [2024-12-05 13:35:24.784024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.330 qpair failed and we were unable to recover it. 00:31:02.330 [2024-12-05 13:35:24.793836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.330 [2024-12-05 13:35:24.793929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.330 [2024-12-05 13:35:24.793943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.330 [2024-12-05 13:35:24.793950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.330 [2024-12-05 13:35:24.793957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.330 [2024-12-05 13:35:24.793971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.330 qpair failed and we were unable to recover it. 
00:31:02.330 [2024-12-05 13:35:24.803961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.330 [2024-12-05 13:35:24.804003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.330 [2024-12-05 13:35:24.804017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.330 [2024-12-05 13:35:24.804025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.330 [2024-12-05 13:35:24.804031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.330 [2024-12-05 13:35:24.804045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.330 qpair failed and we were unable to recover it. 00:31:02.330 [2024-12-05 13:35:24.814020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.330 [2024-12-05 13:35:24.814063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.330 [2024-12-05 13:35:24.814076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.330 [2024-12-05 13:35:24.814084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.330 [2024-12-05 13:35:24.814094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.330 [2024-12-05 13:35:24.814108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.330 qpair failed and we were unable to recover it. 00:31:02.330 [2024-12-05 13:35:24.824039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.330 [2024-12-05 13:35:24.824083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.330 [2024-12-05 13:35:24.824096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.330 [2024-12-05 13:35:24.824104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.330 [2024-12-05 13:35:24.824110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.330 [2024-12-05 13:35:24.824124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.330 qpair failed and we were unable to recover it. 
00:31:02.330 [2024-12-05 13:35:24.833943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.330 [2024-12-05 13:35:24.833994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.330 [2024-12-05 13:35:24.834007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.330 [2024-12-05 13:35:24.834015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.330 [2024-12-05 13:35:24.834022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.330 [2024-12-05 13:35:24.834035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.330 qpair failed and we were unable to recover it. 00:31:02.330 [2024-12-05 13:35:24.844127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.330 [2024-12-05 13:35:24.844202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.330 [2024-12-05 13:35:24.844215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.330 [2024-12-05 13:35:24.844222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.330 [2024-12-05 13:35:24.844230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.331 [2024-12-05 13:35:24.844243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.331 qpair failed and we were unable to recover it. 00:31:02.331 [2024-12-05 13:35:24.854194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.331 [2024-12-05 13:35:24.854236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.331 [2024-12-05 13:35:24.854249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.331 [2024-12-05 13:35:24.854257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.331 [2024-12-05 13:35:24.854263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.331 [2024-12-05 13:35:24.854277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.331 qpair failed and we were unable to recover it. 
00:31:02.331 [2024-12-05 13:35:24.864157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.331 [2024-12-05 13:35:24.864239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.331 [2024-12-05 13:35:24.864252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.331 [2024-12-05 13:35:24.864259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.331 [2024-12-05 13:35:24.864267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.331 [2024-12-05 13:35:24.864281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.331 qpair failed and we were unable to recover it. 00:31:02.331 [2024-12-05 13:35:24.874209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.331 [2024-12-05 13:35:24.874260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.331 [2024-12-05 13:35:24.874274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.331 [2024-12-05 13:35:24.874281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.331 [2024-12-05 13:35:24.874288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.331 [2024-12-05 13:35:24.874302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.331 qpair failed and we were unable to recover it. 00:31:02.331 [2024-12-05 13:35:24.884164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.331 [2024-12-05 13:35:24.884216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.331 [2024-12-05 13:35:24.884230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.331 [2024-12-05 13:35:24.884237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.331 [2024-12-05 13:35:24.884244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.331 [2024-12-05 13:35:24.884258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.331 qpair failed and we were unable to recover it. 
00:31:02.593 [2024-12-05 13:35:24.894214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.593 [2024-12-05 13:35:24.894312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.593 [2024-12-05 13:35:24.894326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.593 [2024-12-05 13:35:24.894333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.593 [2024-12-05 13:35:24.894340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.593 [2024-12-05 13:35:24.894354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.593 qpair failed and we were unable to recover it. 00:31:02.593 [2024-12-05 13:35:24.904249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.593 [2024-12-05 13:35:24.904298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.593 [2024-12-05 13:35:24.904315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.593 [2024-12-05 13:35:24.904323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.593 [2024-12-05 13:35:24.904329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.593 [2024-12-05 13:35:24.904343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.593 qpair failed and we were unable to recover it. 00:31:02.593 [2024-12-05 13:35:24.914297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.593 [2024-12-05 13:35:24.914349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.593 [2024-12-05 13:35:24.914362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.593 [2024-12-05 13:35:24.914370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.593 [2024-12-05 13:35:24.914377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.593 [2024-12-05 13:35:24.914390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.593 qpair failed and we were unable to recover it. 
00:31:02.593 [2024-12-05 13:35:24.924303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.593 [2024-12-05 13:35:24.924354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.593 [2024-12-05 13:35:24.924367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.594 [2024-12-05 13:35:24.924375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.594 [2024-12-05 13:35:24.924381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.594 [2024-12-05 13:35:24.924396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.594 qpair failed and we were unable to recover it. 00:31:02.594 [2024-12-05 13:35:24.934187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.594 [2024-12-05 13:35:24.934235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.594 [2024-12-05 13:35:24.934249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.594 [2024-12-05 13:35:24.934257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.594 [2024-12-05 13:35:24.934263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.594 [2024-12-05 13:35:24.934277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.594 qpair failed and we were unable to recover it. 00:31:02.594 [2024-12-05 13:35:24.944353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.594 [2024-12-05 13:35:24.944399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.594 [2024-12-05 13:35:24.944413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.594 [2024-12-05 13:35:24.944420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.594 [2024-12-05 13:35:24.944430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.594 [2024-12-05 13:35:24.944444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.594 qpair failed and we were unable to recover it. 
00:31:02.594 [2024-12-05 13:35:24.954399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.594 [2024-12-05 13:35:24.954447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.594 [2024-12-05 13:35:24.954461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.594 [2024-12-05 13:35:24.954468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.594 [2024-12-05 13:35:24.954475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.594 [2024-12-05 13:35:24.954489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.594 qpair failed and we were unable to recover it. 00:31:02.594 [2024-12-05 13:35:24.964404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.594 [2024-12-05 13:35:24.964454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.594 [2024-12-05 13:35:24.964468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.594 [2024-12-05 13:35:24.964476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.594 [2024-12-05 13:35:24.964482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.594 [2024-12-05 13:35:24.964495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.594 qpair failed and we were unable to recover it. 00:31:02.594 [2024-12-05 13:35:24.974450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.594 [2024-12-05 13:35:24.974502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.594 [2024-12-05 13:35:24.974516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.594 [2024-12-05 13:35:24.974523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.594 [2024-12-05 13:35:24.974530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1d3a490 00:31:02.594 [2024-12-05 13:35:24.974544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:31:02.594 qpair failed and we were unable to recover it. 
00:31:02.594 [2024-12-05 13:35:24.984473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.594 [2024-12-05 13:35:24.984601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.594 [2024-12-05 13:35:24.984667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.594 [2024-12-05 13:35:24.984692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.594 [2024-12-05 13:35:24.984713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd860000b90 00:31:02.594 [2024-12-05 13:35:24.984768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:31:02.594 qpair failed and we were unable to recover it. 00:31:02.594 [2024-12-05 13:35:24.994513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.594 [2024-12-05 13:35:24.994564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.594 [2024-12-05 13:35:24.994583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.594 [2024-12-05 13:35:24.994589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.594 [2024-12-05 13:35:24.994595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd858000b90 00:31:02.594 [2024-12-05 13:35:24.994609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:02.594 qpair failed and we were unable to recover it. 00:31:02.594 [2024-12-05 13:35:25.004508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:31:02.594 [2024-12-05 13:35:25.004547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:31:02.594 [2024-12-05 13:35:25.004558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:31:02.594 [2024-12-05 13:35:25.004563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:02.594 [2024-12-05 13:35:25.004568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd858000b90 00:31:02.594 [2024-12-05 13:35:25.004579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:31:02.594 qpair failed and we were unable to recover it. 00:31:02.594 [2024-12-05 13:35:25.004725] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:31:02.594 A controller has encountered a failure and is being reset. 00:31:02.594 Controller properly reset. 
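(Editor's note: decoding the completion status seen throughout this sequence: sct 1 is the Command Specific status code type, and for a Fabrics CONNECT command sc 130 (0x82) is "Connect Invalid Parameters" in the NVMe-oF spec, which matches the target-side "Unknown controller ID 0x1" rejection. A small sketch with the values copied from the log entries above:
# Decode the status fields reported by nvme_fabric_qpair_connect_poll:
sct=1    # Status Code Type 1 = Command Specific
sc=130   # 0x82; for a Fabrics CONNECT command: Connect Invalid Parameters
printf 'sct=%#x sc=%#x\n' "$sct" "$sc"   # prints: sct=0x1 sc=0x82
)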
00:31:02.594 Initializing NVMe Controllers 00:31:02.594 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:02.594 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:02.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:31:02.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:31:02.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:31:02.594 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:31:02.594 Initialization complete. Launching workers. 00:31:02.594 Starting thread on core 1 00:31:02.594 Starting thread on core 2 00:31:02.594 Starting thread on core 3 00:31:02.594 Starting thread on core 0 00:31:02.594 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:31:02.594 00:31:02.594 real 0m11.420s 00:31:02.594 user 0m21.739s 00:31:02.594 sys 0m3.616s 00:31:02.595 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:02.595 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:31:02.595 ************************************ 00:31:02.595 END TEST nvmf_target_disconnect_tc2 00:31:02.595 ************************************ 00:31:02.595 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:31:02.595 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:31:02.595 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:31:02.595 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:02.595 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:31:02.595 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:02.595 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:31:02.595 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:02.595 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:02.595 rmmod nvme_tcp 00:31:02.855 rmmod nvme_fabrics 00:31:02.855 rmmod nvme_keyring 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1135912 ']' 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1135912 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1135912 ']' 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1135912 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1135912 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1135912' 00:31:02.855 killing process with pid 1135912 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1135912 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1135912 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:02.855 13:35:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.399 13:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:05.399 00:31:05.399 real 0m22.850s 00:31:05.399 user 0m49.999s 00:31:05.399 sys 0m10.473s 00:31:05.399 13:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:05.399 13:35:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:31:05.399 ************************************ 00:31:05.399 END TEST nvmf_target_disconnect 00:31:05.399 ************************************ 00:31:05.399 13:35:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:05.399 00:31:05.399 real 6m47.760s 00:31:05.399 user 11m28.993s 00:31:05.399 sys 2m24.620s 00:31:05.399 13:35:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:05.399 13:35:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:05.399 ************************************ 00:31:05.399 END TEST nvmf_host 00:31:05.399 ************************************ 00:31:05.399 13:35:27 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:31:05.399 13:35:27 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:31:05.399 13:35:27 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:05.399 13:35:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:05.399 13:35:27 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:05.399 13:35:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:05.399 ************************************ 00:31:05.399 START TEST nvmf_target_core_interrupt_mode 00:31:05.399 ************************************ 00:31:05.399 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:05.399 * Looking for test storage... 00:31:05.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:31:05.399 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:05.399 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:05.399 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:31:05.399 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:05.399 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:05.399 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:05.399 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:05.399 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:31:05.399 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:05.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.400 --rc genhtml_branch_coverage=1 00:31:05.400 --rc genhtml_function_coverage=1 00:31:05.400 --rc genhtml_legend=1 00:31:05.400 --rc geninfo_all_blocks=1 00:31:05.400 --rc geninfo_unexecuted_blocks=1 00:31:05.400 00:31:05.400 ' 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:05.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.400 --rc genhtml_branch_coverage=1 00:31:05.400 --rc genhtml_function_coverage=1 00:31:05.400 --rc genhtml_legend=1 00:31:05.400 --rc geninfo_all_blocks=1 00:31:05.400 --rc geninfo_unexecuted_blocks=1 00:31:05.400 00:31:05.400 ' 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:05.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.400 --rc genhtml_branch_coverage=1 00:31:05.400 --rc genhtml_function_coverage=1 00:31:05.400 --rc genhtml_legend=1 00:31:05.400 --rc geninfo_all_blocks=1 00:31:05.400 --rc geninfo_unexecuted_blocks=1 00:31:05.400 00:31:05.400 ' 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:05.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.400 --rc genhtml_branch_coverage=1 00:31:05.400 --rc genhtml_function_coverage=1 00:31:05.400 --rc genhtml_legend=1 00:31:05.400 --rc geninfo_all_blocks=1 00:31:05.400 --rc geninfo_unexecuted_blocks=1 00:31:05.400 00:31:05.400 ' 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:05.400 ************************************ 00:31:05.400 START TEST nvmf_abort 00:31:05.400 ************************************ 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:05.400 * Looking for test storage... 00:31:05.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:31:05.400 13:35:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:05.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.662 --rc genhtml_branch_coverage=1 00:31:05.662 --rc genhtml_function_coverage=1 00:31:05.662 --rc genhtml_legend=1 00:31:05.662 --rc geninfo_all_blocks=1 00:31:05.662 --rc geninfo_unexecuted_blocks=1 00:31:05.662 00:31:05.662 ' 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:05.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.662 --rc genhtml_branch_coverage=1 00:31:05.662 --rc genhtml_function_coverage=1 00:31:05.662 --rc genhtml_legend=1 00:31:05.662 --rc geninfo_all_blocks=1 00:31:05.662 --rc geninfo_unexecuted_blocks=1 00:31:05.662 00:31:05.662 ' 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:05.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.662 --rc genhtml_branch_coverage=1 00:31:05.662 --rc genhtml_function_coverage=1 00:31:05.662 --rc genhtml_legend=1 00:31:05.662 --rc geninfo_all_blocks=1 00:31:05.662 --rc geninfo_unexecuted_blocks=1 00:31:05.662 00:31:05.662 ' 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:05.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:05.662 --rc genhtml_branch_coverage=1 00:31:05.662 --rc genhtml_function_coverage=1 00:31:05.662 --rc genhtml_legend=1 00:31:05.662 --rc geninfo_all_blocks=1 00:31:05.662 --rc geninfo_unexecuted_blocks=1 00:31:05.662 00:31:05.662 ' 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.662 13:35:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:31:05.662 13:35:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:13.799 13:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:13.799 13:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:31:13.799 13:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:13.799 13:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:13.799 13:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:13.799 13:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:13.799 13:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:13.799 13:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:31:13.799 13:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:13.799 13:35:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:31:13.799 13:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:31:13.799 13:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:31:13.799 13:35:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:13.799 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
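(Editor's note: the device scan above keys on PCI vendor/device IDs; 0x8086:0x159b is an Intel E810-family port bound to the ice driver. A rough equivalent of that lookup, sketched with lspci rather than the harness's own PCI cache; assumes pciutils is installed and is not the script's actual implementation:
# List E810 ports and any kernel net devices still attached to them:
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
  echo "Found $pci (0x8086 - 0x159b)"
  ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null  # empty if bound to vfio/uio
done
)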
00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:13.799 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:13.799 Found net devices under 0000:31:00.0: cvl_0_0 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:13.799 Found net devices under 0000:31:00.1: cvl_0_1 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:13.799 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:13.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:13.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.421 ms 00:31:13.800 00:31:13.800 --- 10.0.0.2 ping statistics --- 00:31:13.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.800 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:13.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:31:13.800 00:31:13.800 --- 10.0.0.1 ping statistics --- 00:31:13.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.800 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:13.800 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:14.060 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:31:14.061 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:14.061 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:14.061 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.061 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1142018 00:31:14.061 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1142018 00:31:14.061 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:14.061 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1142018 ']' 00:31:14.061 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:14.061 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:14.061 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:14.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:14.061 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:14.061 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.061 [2024-12-05 13:35:36.444810] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:14.061 [2024-12-05 13:35:36.445790] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:31:14.061 [2024-12-05 13:35:36.445825] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:14.061 [2024-12-05 13:35:36.547782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:14.061 [2024-12-05 13:35:36.582987] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:14.061 [2024-12-05 13:35:36.583022] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:14.061 [2024-12-05 13:35:36.583030] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:14.061 [2024-12-05 13:35:36.583037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:14.061 [2024-12-05 13:35:36.583043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:14.061 [2024-12-05 13:35:36.584358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:14.061 [2024-12-05 13:35:36.584512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:14.061 [2024-12-05 13:35:36.584513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:14.320 [2024-12-05 13:35:36.641005] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:14.320 [2024-12-05 13:35:36.641088] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:14.320 [2024-12-05 13:35:36.641603] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
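For anyone replaying this phase by hand: the harness has just carved the target-side port out into a private network namespace and started nvmf_tgt inside it in interrupt mode. Condensed from the commands traced above (the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and the nvmf_tgt flags are the values from this particular run, not fixed defaults), the sequence is roughly:

  ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port out of the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port on the initiator side
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE &

The two pings before the launch (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) verify reachability across the physical link in both directions before any NVMe traffic flows.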
00:31:14.320 [2024-12-05 13:35:36.641958] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.320 [2024-12-05 13:35:36.713300] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.320 Malloc0 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.320 Delay0 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 
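rpc_cmd in the trace above is the autotest wrapper around scripts/rpc.py, pointed at /var/tmp/spdk.sock inside the namespace. Collected from this stretch of the log (the two nvmf_subsystem_add_listener calls follow just below), the abort target's configuration reduces to the following rpc.py sequence; the per-flag comments are editorial readings of the logged values, not output from this run:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB RAM-backed bdev, 4096-byte blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000     # wrap Malloc0 in ~1 s injected latency (values in microseconds)
                                                      # so the abort example always has inflight I/O to cancel
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a allows any host, -s sets the serial
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420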
00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:14.320 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:14.320 [2024-12-05 13:35:36.813282] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:14.321 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:14.321 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:14.321 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:14.321 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:14.321 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:14.321 13:35:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:31:14.580 [2024-12-05 13:35:36.897231] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:31:16.497 Initializing NVMe Controllers
00:31:16.497 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:31:16.497 controller IO queue size 128 less than required
00:31:16.497 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:31:16.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:31:16.497 Initialization complete. Launching workers.
00:31:16.497 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27978
00:31:16.497 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28035, failed to submit 66
00:31:16.497 success 27978, unsuccessful 57, failed 0
00:31:16.497 13:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:31:16.497 13:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:16.497 13:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:16.497 13:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:16.497 13:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:31:16.497 13:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:31:16.497 13:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:16.497 13:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:31:16.497 13:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:16.497 13:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:31:16.497 13:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:16.497 13:35:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:16.497 rmmod nvme_tcp
00:31:16.497 rmmod nvme_fabrics
00:31:16.497 rmmod nvme_keyring
00:31:16.497 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:16.497 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:31:16.497 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:31:16.497 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1142018 ']'
00:31:16.497 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1142018
00:31:16.497 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1142018 ']'
00:31:16.497 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1142018
00:31:16.497 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname
00:31:16.497 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1142018
00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1142018'
00:31:16.834 killing process with pid 1142018
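Read as a cross-check, the abort example's summary is internally consistent: of the 28035 abort commands submitted, 27978 caught and cancelled their target I/O and 57 completed without finding one still inflight (27978 + 57 = 28035), while a further 66 aborts could not be submitted at all; 'failed 0' says no abort itself completed in error. The NS line counts the victims from the other side: 123 reads finished normally and 27978 were terminated by aborts, matching the success count exactly. A nonzero abort rate is precisely what the Delay0 bdev's injected latency is there to guarantee.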
00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1142018 00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1142018 00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.834 13:35:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:19.380 00:31:19.380 real 0m13.495s 00:31:19.380 user 0m10.927s 00:31:19.380 sys 0m7.292s 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:19.380 ************************************ 00:31:19.380 END TEST nvmf_abort 00:31:19.380 ************************************ 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:19.380 ************************************ 00:31:19.380 START TEST nvmf_ns_hotplug_stress 00:31:19.380 ************************************ 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:19.380 * Looking for test storage... 
00:31:19.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:31:19.380 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:19.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.381 --rc genhtml_branch_coverage=1 00:31:19.381 --rc genhtml_function_coverage=1 00:31:19.381 --rc genhtml_legend=1 00:31:19.381 --rc geninfo_all_blocks=1 00:31:19.381 --rc geninfo_unexecuted_blocks=1 00:31:19.381 00:31:19.381 ' 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:19.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.381 --rc genhtml_branch_coverage=1 00:31:19.381 --rc genhtml_function_coverage=1 00:31:19.381 --rc genhtml_legend=1 00:31:19.381 --rc geninfo_all_blocks=1 00:31:19.381 --rc geninfo_unexecuted_blocks=1 00:31:19.381 00:31:19.381 ' 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:19.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.381 --rc genhtml_branch_coverage=1 00:31:19.381 --rc genhtml_function_coverage=1 00:31:19.381 --rc genhtml_legend=1 00:31:19.381 --rc geninfo_all_blocks=1 00:31:19.381 --rc geninfo_unexecuted_blocks=1 00:31:19.381 00:31:19.381 ' 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:19.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:19.381 --rc genhtml_branch_coverage=1 00:31:19.381 --rc genhtml_function_coverage=1 
00:31:19.381 --rc genhtml_legend=1 00:31:19.381 --rc geninfo_all_blocks=1 00:31:19.381 --rc geninfo_unexecuted_blocks=1 00:31:19.381 00:31:19.381 ' 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:31:19.381 13:35:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:27.531 13:35:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:27.531 13:35:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:27.531 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:27.531 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:27.531 
13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:27.531 Found net devices under 0000:31:00.0: cvl_0_0 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:27.531 Found net devices under 0000:31:00.1: cvl_0_1 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:27.531 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:27.532 13:35:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:27.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:27.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:31:27.532 00:31:27.532 --- 10.0.0.2 ping statistics --- 00:31:27.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.532 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:27.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:27.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:31:27.532 00:31:27.532 --- 10.0.0.1 ping statistics --- 00:31:27.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:27.532 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1147118 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1147118 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1147118 ']' 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:27.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:27.532 13:35:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:27.532 [2024-12-05 13:35:50.049292] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:27.532 [2024-12-05 13:35:50.050491] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:31:27.532 [2024-12-05 13:35:50.050544] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:27.793 [2024-12-05 13:35:50.160025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:27.793 [2024-12-05 13:35:50.206326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:27.793 [2024-12-05 13:35:50.206380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:27.793 [2024-12-05 13:35:50.206389] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:27.793 [2024-12-05 13:35:50.206396] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:27.793 [2024-12-05 13:35:50.206403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:27.793 [2024-12-05 13:35:50.208269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:27.793 [2024-12-05 13:35:50.208431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:27.793 [2024-12-05 13:35:50.208432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:27.793 [2024-12-05 13:35:50.281181] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:27.793 [2024-12-05 13:35:50.281201] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:27.793 [2024-12-05 13:35:50.281972] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:27.793 [2024-12-05 13:35:50.282188] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
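waitforlisten above parks the script until the freshly launched nvmf_tgt (pid 1147118) answers on /var/tmp/spdk.sock. As a minimal stand-in for what the helper does (a sketch, not the actual implementation in autotest_common.sh; the pid assignment is hypothetical):

  pid=1147118                       # hypothetical: in the script this comes from $nvmfpid
  for i in $(seq 1 100); do
      kill -0 "$pid" || exit 1      # give up if the target already died
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done

Once any RPC round-trips on the socket, the target is up and configuration can begin.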
00:31:28.365 13:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:28.365 13:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:31:28.365 13:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:28.365 13:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:28.365 13:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:28.365 13:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:28.365 13:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:31:28.365 13:35:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:28.626 [2024-12-05 13:35:51.057258] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:28.626 13:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:28.887 13:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:28.887 [2024-12-05 13:35:51.410142] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:28.887 13:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:29.149 13:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:31:29.410 Malloc0 00:31:29.410 13:35:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:29.671 Delay0 00:31:29.671 13:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:29.671 13:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:31:29.931 NULL1 00:31:29.931 13:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
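With cnode1 carrying Delay0 and NULL1, the stress phase that follows keeps spdk_nvme_perf reading from the target for 30 seconds while the script repeatedly removes and re-adds namespace 1 and grows NULL1 one unit per pass (null_size starts at 1000; the resizes below step it to 1001, 1002, ...). Condensed from the calls visible in the log, the cycle amounts to something like this sketch (the exact control flow in ns_hotplug_stress.sh may differ):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!                                         # logged below as PERF_PID=1147750
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do           # cycle for as long as perf keeps running
      $spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $spdk/scripts/rpc.py bdev_null_resize NULL1 "$null_size"   # prints 'true' on success, as below
  done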
00:31:30.193 13:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1147750 00:31:30.193 13:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:30.193 13:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:31:30.193 13:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.193 13:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.454 13:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:31:30.454 13:35:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:31:30.714 true 00:31:30.714 13:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:30.714 13:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.714 13:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:30.976 13:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:31:30.976 13:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:31:31.236 true 00:31:31.236 13:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:31.236 13:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.236 13:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:31.497 13:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:31:31.497 13:35:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:31:31.757 true 00:31:31.757 13:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:31.757 13:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.017 13:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.017 13:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:31:32.017 13:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:31:32.277 true 00:31:32.277 13:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:32.277 13:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.537 13:35:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:32.537 13:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:32.537 13:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:32.798 true 00:31:32.798 13:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:32.798 13:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.057 13:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:33.057 13:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:33.057 13:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:31:33.317 true 00:31:33.317 13:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:33.317 13:35:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:34.698 Read completed with error (sct=0, sc=11) 00:31:34.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:34.698 13:35:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:34.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:34.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:34.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:34.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:34.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:34.698 13:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:34.698 13:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:34.959 true 00:31:34.959 13:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:34.959 13:35:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:35.902 13:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:35.902 13:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:35.902 13:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:36.161 true 00:31:36.161 13:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:36.161 13:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:36.161 13:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:36.421 13:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:36.421 13:35:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:36.682 true 00:31:36.682 13:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:36.682 13:35:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:37.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:37.623 13:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:37.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:37.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:37.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:37.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:37.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:37.883 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:37.883 13:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:37.883 13:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:38.143 true 00:31:38.143 13:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:38.143 13:36:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:39.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:39.081 13:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:39.081 13:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:39.081 13:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:39.342 true 00:31:39.342 13:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:39.342 13:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:39.603 13:36:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:39.603 13:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:39.603 13:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:39.863 true 00:31:39.863 13:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:39.863 13:36:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:41.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:41.250 13:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:41.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:41.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:41.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:41.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:41.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:41.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:41.250 13:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:41.250 13:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:41.510 true 00:31:41.510 13:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:41.510 13:36:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:42.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:42.450 13:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:42.450 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:42.450 13:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:42.450 13:36:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:42.710 true 00:31:42.710 13:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:42.710 13:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:42.710 13:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:42.970 13:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:42.970 13:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:43.231 true 00:31:43.231 13:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:43.231 13:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:43.231 13:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:43.492 13:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:43.492 13:36:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:43.753 true 00:31:43.753 13:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:43.753 13:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:43.753 13:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:44.014 13:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:44.014 13:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:44.275 true 00:31:44.275 13:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:44.275 13:36:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:45.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:45.657 13:36:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:45.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:45.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:45.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:45.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:45.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:45.657 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:45.657 13:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:45.657 13:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:45.657 true 00:31:45.657 13:36:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:45.657 13:36:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.596 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:46.596 13:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:46.856 13:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:46.856 13:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:46.856 true 00:31:46.856 13:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:46.856 13:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.117 13:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:47.377 13:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:47.377 13:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:47.377 true 00:31:47.377 13:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:47.377 13:36:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.639 13:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:47.900 13:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:47.900 13:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:47.900 true 00:31:47.900 13:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:47.900 13:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:48.161 13:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:48.423 13:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:48.423 13:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:48.423 true 00:31:48.423 13:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:48.423 13:36:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:49.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:49.808 13:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:49.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:49.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:49.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:49.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:49.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:49.809 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:49.809 13:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:49.809 13:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:50.069 true 00:31:50.069 13:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:50.069 13:36:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:51.011 13:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:51.011 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:51.011 13:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:51.011 13:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:51.272 true 00:31:51.272 13:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:51.272 13:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:51.532 13:36:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:51.532 13:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:51.532 13:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:51.793 true 00:31:51.793 13:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:51.793 13:36:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:53.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:53.180 13:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:53.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:53.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:53.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:53.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:53.180 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:53.180 13:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:53.180 13:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:53.440 true 00:31:53.440 13:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:53.440 13:36:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:54.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:54.380 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:54.380 13:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:54.380 13:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:54.380 13:36:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:54.641 true 00:31:54.641 13:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:54.641 13:36:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:54.641 13:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:54.901 13:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:54.901 13:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:55.162 true 00:31:55.162 13:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:55.162 13:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:55.422 13:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:55.422 13:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:55.422 13:36:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:55.683 true 00:31:55.683 13:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:55.683 13:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:55.943 13:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:55.943 13:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:31:55.943 13:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:31:56.205 true 00:31:56.205 13:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:56.205 13:36:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:57.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:57.588 13:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:31:57.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:57.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:57.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:57.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:57.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:57.588 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:57.588 13:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:31:57.588 13:36:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:31:57.588 true 00:31:57.848 13:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:57.848 13:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.418 13:36:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:58.677 13:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:31:58.677 13:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:31:58.936 true 00:31:58.936 13:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:58.936 13:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:59.196 13:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:59.196 13:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:31:59.196 13:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:31:59.455 true 00:31:59.455 13:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:31:59.455 13:36:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.836 Initializing NVMe Controllers 00:32:00.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:00.836 Controller IO queue size 128, less than required. 
00:32:00.836 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:00.836 Controller IO queue size 128, less than required. 00:32:00.836 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:00.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:00.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:32:00.836 Initialization complete. Launching workers. 00:32:00.836 ======================================================== 00:32:00.836 Latency(us) 00:32:00.836 Device Information : IOPS MiB/s Average min max 00:32:00.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1921.60 0.94 37862.56 2159.88 1060369.75 00:32:00.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16805.57 8.21 7591.41 1480.62 593674.67 00:32:00.836 ======================================================== 00:32:00.836 Total : 18727.17 9.14 10697.54 1480.62 1060369.75 00:32:00.836 00:32:00.836 13:36:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:00.836 13:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:32:00.836 13:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:32:00.836 true 00:32:00.836 13:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147750 00:32:00.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1147750) - No such process 00:32:00.836 13:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1147750 00:32:00.836 13:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:01.113 13:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:01.374 13:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:32:01.374 13:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:32:01.374 13:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:32:01.374 13:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:01.374 13:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:32:01.374 null0 00:32:01.375 13:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:01.375 13:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:01.375 13:36:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:32:01.635 null1 00:32:01.635 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:01.635 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:01.635 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:32:01.896 null2 00:32:01.896 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:01.896 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:01.896 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:32:01.896 null3 00:32:01.896 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:01.896 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:01.896 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:32:02.156 null4 00:32:02.156 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:02.156 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:02.156 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:32:02.156 null5 00:32:02.156 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:02.156 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:02.156 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:32:02.416 null6 00:32:02.416 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:02.416 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:02.416 13:36:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:32:02.678 null7 
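Note: the spdk_nvme_perf summary printed just before this point is worth decoding. NSID 1 is the Delay0 namespace (artificial per-IO delays, hence the ~37.9 ms average latency, with the ~1.06 s max plausibly inflated by IOs held across remove/re-add cycles), while NSID 2 is the NULL1 bdev (~7.6 ms average, dominated by queueing at -q 128). The MiB/s column is just IOPS times the 512 B transfer size, and the Total row is the IOPS-weighted mean of the two per-namespace averages, which a quick check reproduces:

    # Recompute the Total average latency from the two per-namespace rows in the summary:
    awk 'BEGIN {
        iops1 = 1921.60;  avg1 = 37862.56   # NSID 1 (Delay0)
        iops2 = 16805.57; avg2 = 7591.41    # NSID 2 (NULL1)
        printf "%.2f us\n", (iops1 * avg1 + iops2 * avg2) / (iops1 + iops2)   # -> 10697.54 us
    }'

With perf gone, the script tears down both namespaces (@54/@55) and switches to the second phase traced here: nthreads=8 null bdevs (null0..null7, 100 MB each with 4 KiB blocks) backing eight parallel add/remove workers.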
00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
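Note: from here on, the xtrace entries of eight concurrent background workers interleave, which is why the @62-@64 driver lines and the @14-@18 worker lines appear shuffled together. Reconstructed from those entries, the driver phase has roughly this shape ($rpc as above; a sketch, with the add_remove body itself sketched a little further down):

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do          # @59-@60
        $rpc bdev_null_create "null$i" 100 4096   # 100 MB each, 4 KiB blocks
    done
    for ((i = 0; i < nthreads; i++)); do          # @62-@64
        add_remove $((i + 1)) "null$i" &          # worker i churns NSID i+1
        pids+=($!)
    done
    wait "${pids[@]}"                             # @66: wait 1154413 1154414 ... 1154426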
00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
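Note: the worker body can be read off the @14-@18 trace entries: each worker owns one namespace ID and one null bdev and attaches then detaches it ten times. As a sketch under the same caveats:

    add_remove() {
        local nsid=$1 bdev=$2                     # @14
        for ((i = 0; i < 10; i++)); do            # @16
            $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }

Since worker k only ever touches NSID k+1, the eight loops exercise concurrent RPC handling and the namespace attach/detach path without two workers racing on the same namespace.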
00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
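Note: while this phase runs, the subsystem's namespace list fluctuates as the eight workers add and remove their NSIDs. A hypothetical way to watch that from another shell (not part of this test; nvmf_get_subsystems is a real SPDK RPC, but the jq filter is an assumption about its JSON output and jq itself may not be installed on the test node):

    # Print the NSIDs currently attached to cnode1; rerun (or wrap in watch) to see the churn.
    $rpc nvmf_get_subsystems \
        | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | [.namespaces[].nsid]'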
00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1154413 1154414 1154417 1154418 1154420 1154422 1154424 1154426 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:02.678 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:02.968 13:36:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:02.968 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.229 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:03.490 13:36:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:03.490 13:36:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:03.490 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:03.490 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.750 13:36:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:03.750 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:04.011 
13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.011 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:04.271 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.271 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.271 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:04.271 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:04.271 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:04.271 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:04.271 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:04.271 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:04.272 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:04.272 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:04.272 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:04.532 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.533 
13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.533 13:36:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:04.533 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:04.533 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:04.533 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:04.533 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:04.533 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:04.533 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:04.533 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.794 13:36:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:04.794 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:05.055 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:05.317 13:36:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:05.317 
13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:05.317 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:05.579 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.579 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.579 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:05.579 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.579 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.579 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:05.579 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.579 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.579 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:05.579 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.579 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.579 13:36:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:05.579 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.579 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.579 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:05.579 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.579 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.579 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:05.579 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.579 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.579 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:05.579 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.579 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.579 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:05.841 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:06.102 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:06.102 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:06.102 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:06.102 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:06.102 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:06.102 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:06.102 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:06.102 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:06.102 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:06.102 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:06.103 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:06.103 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:06.103 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:06.103 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:06.103 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:06.103 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:06.103 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:06.364 13:36:28 
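The bare (( ++i ))/(( i < 10 )) pairs here are the workers' loops expiring; once the last round completes, the harness moves into teardown, and the records that follow trace nvmftestfini unloading the kernel nvme-tcp/nvme-fabrics modules and killing the target process (pid 1147118). A rough sketch of that cleanup path, pieced together from the nvmf/common.sh and autotest_common.sh line numbers in the trace; the retry back-off and error handling are assumptions:

  # nvmfcleanup: unload the kernel NVMe-oF initiator modules (nvmf/common.sh@121-129 in the trace).
  nvmfcleanup() {
      sync
      set +e                                  # module removal can take a few attempts
      for i in {1..20}; do
          modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
          sleep 1                             # assumed back-off; not visible in the trace
      done
      set -e
  }
  # killprocess: stop the nvmf target process (autotest_common.sh@954-978 in the trace).
  killprocess() {
      local pid=$1
      kill -0 "$pid"                          # make sure it is still alive
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                             # reap it so the harness sees the exit status
  }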
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:06.364 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:06.364 rmmod nvme_tcp 00:32:06.364 rmmod nvme_fabrics 00:32:06.624 rmmod nvme_keyring 00:32:06.624 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:06.624 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:32:06.624 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:32:06.624 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1147118 ']' 00:32:06.624 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1147118 00:32:06.624 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1147118 ']' 00:32:06.624 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1147118 00:32:06.624 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:32:06.624 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:06.624 13:36:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1147118 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1147118' 00:32:06.624 killing process with pid 1147118 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1147118 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1147118 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:06.624 13:36:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.194 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:09.194 00:32:09.194 real 0m49.790s 00:32:09.194 user 2m57.577s 00:32:09.194 sys 0m20.950s 00:32:09.194 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.194 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:32:09.194 ************************************ 00:32:09.194 END TEST nvmf_ns_hotplug_stress 00:32:09.194 ************************************ 00:32:09.194 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:09.194 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:09.194 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:09.194 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:09.194 ************************************ 00:32:09.194 START TEST nvmf_delete_subsystem 00:32:09.194 
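The rest of the cleanup, traced above, kills the target by PID, waits for it to exit, scrubs only the firewall rules the harness added (each carries an SPDK_NVMF comment, so a grep -v over iptables-save is enough), and flushes the test interface. Condensed into one sequence, with the PID and interface name taken from this run:

pid=1147118                                   # nvmf_tgt PID in this run
kill "$pid"
wait "$pid"                                   # reap it; works because it is our child
# Drop only harness-added rules: keep every line without the SPDK_NVMF tag.
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip -4 addr flush cvl_0_1                      # initiator-side test NIC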
************************************ 00:32:09.194 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:32:09.194 * Looking for test storage... 00:32:09.194 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:09.194 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:09.194 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:32:09.194 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:09.194 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:09.194 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:09.194 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:09.194 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:09.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.195 --rc genhtml_branch_coverage=1 00:32:09.195 --rc genhtml_function_coverage=1 00:32:09.195 --rc genhtml_legend=1 00:32:09.195 --rc geninfo_all_blocks=1 00:32:09.195 --rc geninfo_unexecuted_blocks=1 00:32:09.195 00:32:09.195 ' 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:09.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.195 --rc genhtml_branch_coverage=1 00:32:09.195 --rc genhtml_function_coverage=1 00:32:09.195 --rc genhtml_legend=1 00:32:09.195 --rc geninfo_all_blocks=1 00:32:09.195 --rc geninfo_unexecuted_blocks=1 00:32:09.195 00:32:09.195 ' 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:09.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.195 --rc genhtml_branch_coverage=1 00:32:09.195 --rc genhtml_function_coverage=1 00:32:09.195 --rc genhtml_legend=1 00:32:09.195 --rc geninfo_all_blocks=1 00:32:09.195 --rc geninfo_unexecuted_blocks=1 00:32:09.195 00:32:09.195 ' 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:09.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:09.195 --rc genhtml_branch_coverage=1 00:32:09.195 --rc genhtml_function_coverage=1 00:32:09.195 --rc 
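The scripts/common.sh traces above are a dotted-version comparison: lt 1.15 2 splits both version strings on '.', '-' and ':' (the IFS=.-: / read -ra pair), then walks the components numerically, padding the shorter list with zeros. A condensed function with the same behaviour, not the exact cmp_versions body:

# version_lt A B  ->  returns 0 (true) iff A < B, numeric per component
version_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # pad missing components with 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                              # equal is not "less than"
}
version_lt 1.15 2 && echo 'lcov 1.15 is older than 2'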
genhtml_legend=1 00:32:09.195 --rc geninfo_all_blocks=1 00:32:09.195 --rc geninfo_unexecuted_blocks=1 00:32:09.195 00:32:09.195 ' 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:09.195 13:36:31 
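One detail worth noting in the common.sh setup above: the initiator identity comes from nvme-cli. nvme gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the matching NVME_HOSTNQN/NVME_HOSTID values in the trace suggest the host ID is just the UUID tail; the stripping below is an assumption about how it is derived, not the verbatim common.sh line:

NVME_HOSTNQN=$(nvme gen-hostnqn)            # nqn.2014-08.org.nvmexpress:uuid:...
NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}        # assumed: host ID = UUID suffix
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
echo "initiator identity: ${NVME_HOST[*]}"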
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:09.195 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:32:09.196 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:09.196 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:09.196 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:09.196 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:09.196 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:09.196 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:09.196 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:09.196 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.196 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:09.196 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:09.196 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:32:09.196 13:36:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:17.474 13:36:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:17.474 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:17.475 13:36:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:17.475 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:17.475 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.475 13:36:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:17.475 Found net devices under 0000:31:00.0: cvl_0_0 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:17.475 Found net devices under 0000:31:00.1: cvl_0_1 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
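gather_supported_nvmf_pci_devs, traced above, whitelists NIC PCI IDs (Intel E810 0x1592/0x159b and X722 0x37d2, plus a list of Mellanox devices) and then maps each matching PCI function to its kernel net device through sysfs; here both E810 ports resolve to cvl_0_0 and cvl_0_1. The sysfs lookup for one PCI address boils down to the following sketch; the operstate check mirrors the [[ up == up ]] test in the trace:

pci=0000:31:00.0
# A NIC's netdev name lives under its PCI node in sysfs.
for dev in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $dev ]] || continue                   # no netdev bound to this function
    name=${dev##*/}
    state=$(<"/sys/class/net/$name/operstate")
    [[ $state == up ]] && echo "Found net device under $pci: $name"
done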
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:17.475 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:17.475 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:32:17.475 00:32:17.475 --- 10.0.0.2 ping statistics --- 00:32:17.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.475 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:17.475 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:17.475 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:32:17.475 00:32:17.475 --- 10.0.0.1 ping statistics --- 00:32:17.475 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:17.475 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1159971 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1159971 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:17.475 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1159971 ']' 00:32:17.476 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.476 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:17.476 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
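The topology behind those pings was assembled by nvmf_tcp_init a little further up: the first port (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, the NVMe/TCP port 4420 is opened with a tagged iptables rule, and a ping in each direction proves the link. The commands, collected from the trace into one runnable block:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Tag the rule so cleanup can find and remove it later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator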
00:32:17.476 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:17.476 13:36:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:17.476 [2024-12-05 13:36:39.443426] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:17.476 [2024-12-05 13:36:39.444587] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:32:17.476 [2024-12-05 13:36:39.444639] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:17.476 [2024-12-05 13:36:39.536841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:17.476 [2024-12-05 13:36:39.576894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:17.476 [2024-12-05 13:36:39.576927] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:17.476 [2024-12-05 13:36:39.576935] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:17.476 [2024-12-05 13:36:39.576941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:17.476 [2024-12-05 13:36:39.576947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:17.476 [2024-12-05 13:36:39.578286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:17.476 [2024-12-05 13:36:39.578288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:17.476 [2024-12-05 13:36:39.635088] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:17.476 [2024-12-05 13:36:39.635702] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:17.476 [2024-12-05 13:36:39.636042] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
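nvmfappstart then launches the target inside that namespace with the argument list build_nvmf_app_args assembled earlier (-i for the shared-memory ID, -e 0xFFFF for the tracepoint mask, --interrupt-mode because this suite runs the interrupt-mode variant, and -m 0x3 for cores 0-1, matching the two reactors in the startup notices above) and blocks until the RPC socket answers. A simplified stand-in for that start-and-wait handshake; the polling loop is a sketch of waitforlisten, not its verbatim body:

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
# Poll the RPC socket until the app is up (simplified waitforlisten).
for (( i = 0; i < 100; i++ )); do
    if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done
echo "nvmf_tgt ready, pid $nvmfpid"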
00:32:17.745 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:17.745 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:32:17.745 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:17.745 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:17.745 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:17.745 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:17.745 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:17.745 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.745 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:17.745 [2024-12-05 13:36:40.298940] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:17.745 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.745 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:17.745 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.745 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:18.005 [2024-12-05 13:36:40.327492] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:18.005 NULL1 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.005 13:36:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:18.005 Delay0 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1160215 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:32:18.005 13:36:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:18.005 [2024-12-05 13:36:40.432756] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
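The whole fixture for this test is visible in the RPCs above: a TCP transport (flags as traced; -u 8192 sets the I/O unit size), a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev wrapped in a delay bdev whose latencies are all 1,000,000 us, roughly a second per I/O, which guarantees a deep queue of in-flight requests for the deletion to hit. Collected into one sequence, with paths shortened and flags exactly as traced:

rpc=./scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512          # 1000 MiB, 512-byte blocks
# ~1 s latency on every read/write: I/O piles up behind the delay bdev.
$rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns "$nqn" Delay0
./build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2                                       # let perf connect and queue I/O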
00:32:19.916 13:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:19.916 13:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:19.916 13:36:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:19.916 Read completed with error (sct=0, sc=8) 00:32:19.916 Write completed with error (sct=0, sc=8) 00:32:19.916 starting I/O failed: -6 00:32:19.916 Write completed with error (sct=0, sc=8) 00:32:19.916 Read completed with error (sct=0, sc=8) 00:32:19.916 Read completed with error (sct=0, sc=8) 00:32:19.916 Write completed with error (sct=0, sc=8) 00:32:19.916 starting I/O failed: -6 00:32:19.916 Write completed with error (sct=0, sc=8) 00:32:19.916 Read completed with error (sct=0, sc=8) 00:32:19.916 Write completed with error (sct=0, sc=8) 00:32:19.916 Read completed with error (sct=0, sc=8) 00:32:19.916 starting I/O failed: -6 00:32:19.916 Read completed with error (sct=0, sc=8) 00:32:19.916 Read completed with error (sct=0, sc=8) 00:32:19.916 Write completed with error (sct=0, sc=8) 00:32:19.916 Read completed with error (sct=0, sc=8) 00:32:19.916 starting I/O failed: -6 00:32:19.916 Write completed with error (sct=0, sc=8) 00:32:19.916 Read completed with error (sct=0, sc=8) 00:32:19.916 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 starting I/O failed: -6 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 starting I/O failed: -6 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 starting I/O failed: -6 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 starting I/O failed: -6 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 starting I/O failed: -6 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 starting I/O failed: -6 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 starting I/O failed: -6 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 [2024-12-05 13:36:42.481654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2106f00 is same with the state(6) to be set 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read 
completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Write completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:19.917 Read completed with error (sct=0, sc=8) 00:32:20.201 Read completed with error (sct=0, sc=8) 00:32:20.201 Write completed with error (sct=0, sc=8) 00:32:20.201 Read completed with error (sct=0, sc=8) 00:32:20.201 Write completed with error (sct=0, sc=8) 00:32:20.201 starting I/O failed: -6 00:32:20.201 Read completed with error (sct=0, sc=8) 00:32:20.201 Read completed with error (sct=0, sc=8) 00:32:20.201 Read completed with error (sct=0, sc=8) 00:32:20.201 Write completed with error (sct=0, sc=8) 00:32:20.201 starting I/O failed: -6 00:32:20.201 Read completed with error (sct=0, sc=8) 00:32:20.201 Read completed with error (sct=0, sc=8) 00:32:20.201 Read completed with error (sct=0, sc=8) 00:32:20.201 Write completed with 
error (sct=0, sc=8)
[... a long run of repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" completions interleaved with "starting I/O failed: -6" records (00:32:20.201 through 00:32:21.144) omitted; the unique nvme_tcp errors from that window follow ...]
00:32:20.201 [2024-12-05 13:36:42.486382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2150000c40 is same with the state(6) to be set
00:32:21.143 [2024-12-05 13:36:43.449194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21085f0 is same with the state(6) to be set
00:32:21.143 [2024-12-05 13:36:43.484911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21070e0 is same with the state(6) to be set
00:32:21.144 [2024-12-05 13:36:43.485354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21074a0 is same with the state(6) to be set
00:32:21.144 [2024-12-05 13:36:43.488229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f215000d020 is same with the state(6) to be set
00:32:21.144 [2024-12-05 13:36:43.488500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f215000d7e0 is same with the state(6) to be set
00:32:21.144 Initializing NVMe Controllers
00:32:21.144 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:21.144 Controller IO queue size 128, less than required.
00:32:21.144 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:21.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:32:21.144 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:32:21.144 Initialization complete. Launching workers.
00:32:21.144 ========================================================
00:32:21.144 Latency(us)
00:32:21.144 Device Information                                                       :   IOPS  MiB/s    Average        min        max
00:32:21.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.78   0.08  894813.75     245.73 1006754.25
00:32:21.144 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.81   0.08  912608.15     290.80 1012248.41
00:32:21.144 ========================================================
00:32:21.144 Total                                                                    : 331.59   0.16  903497.20     245.73 1012248.41
00:32:21.144
00:32:21.144 [2024-12-05 13:36:43.489134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21085f0 (9): Bad file descriptor
00:32:21.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:32:21.144 13:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:21.144 13:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:32:21.144 13:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1160215
00:32:21.144 13:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:32:21.716 13:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:32:21.716 13:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1160215
00:32:21.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1160215) - No such process
00:32:21.716 13:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1160215
00:32:21.716 13:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:32:21.716 13:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1160215
00:32:21.716 13:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:32:21.716 13:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:21.716 13:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:32:21.716 13:36:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1160215
00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es >
128 )) 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:21.716 [2024-12-05 13:36:44.023717] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1160936 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1160936 00:32:21.716 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:21.716 [2024-12-05 13:36:44.095965] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
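The records above show delete_subsystem.sh backgrounding spdk_nvme_perf (perf_pid=1160936) and then polling it: kill -0 probes liveness without delivering a signal, sleep 0.5 paces the loop, and a delay counter bounds the wait; the iterations of that loop follow below. A minimal, self-contained sketch of the same idiom, using a stand-in workload rather than the script's exact code:

# Start a workload in the background and remember its PID.
sleep 3 &                     # stand-in for the real workload (spdk_nvme_perf here)
pid=$!

# kill -0 sends no signal; it only tests that the PID exists and is
# signalable, which makes it a cheap liveness probe.
delay=0
while kill -0 "$pid" 2>/dev/null; do
    if (( delay++ > 20 )); then                # bound the wait (~10 s at 0.5 s per probe)
        echo "timed out waiting for PID $pid" >&2
        break
    fi
    sleep 0.5
done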
00:32:22.289 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:22.289 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1160936 00:32:22.289 13:36:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:22.558 13:36:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:22.558 13:36:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1160936 00:32:22.558 13:36:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:23.128 13:36:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:23.128 13:36:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1160936 00:32:23.128 13:36:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:23.698 13:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:23.698 13:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1160936 00:32:23.698 13:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:24.268 13:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:24.268 13:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1160936 00:32:24.268 13:36:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:24.529 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:24.529 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1160936 00:32:24.529 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:24.789 Initializing NVMe Controllers 00:32:24.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:24.789 Controller IO queue size 128, less than required. 00:32:24.789 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:24.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:24.790 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:24.790 Initialization complete. Launching workers. 
00:32:24.790 ========================================================
00:32:24.790 Latency(us)
00:32:24.790 Device Information                                                       :   IOPS  MiB/s    Average        min        max
00:32:24.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00   0.06 1002351.69 1000196.18 1006813.35
00:32:24.790 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00   0.06 1004681.17 1000363.05 1041986.32
00:32:24.790 ========================================================
00:32:24.790 Total                                                                    : 256.00   0.12 1003516.43 1000196.18 1041986.32
00:32:24.790
00:32:25.051 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:32:25.051 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1160936
00:32:25.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1160936) - No such process
00:32:25.051 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1160936
00:32:25.051 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:32:25.051 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:32:25.051 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:25.051 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:32:25.051 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:25.051 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:32:25.051 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:25.051 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:25.051 rmmod nvme_tcp
00:32:25.051 rmmod nvme_fabrics
00:32:25.312 rmmod nvme_keyring
00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1159971 ']'
00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1159971
00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1159971 ']'
00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1159971
00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1159971 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1159971' 00:32:25.312 killing process with pid 1159971 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1159971 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1159971 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.312 13:36:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.859 13:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:27.859 00:32:27.859 real 0m18.623s 00:32:27.859 user 0m26.170s 00:32:27.859 sys 0m7.787s 00:32:27.859 13:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:27.859 13:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:27.859 ************************************ 00:32:27.859 END TEST nvmf_delete_subsystem 00:32:27.859 ************************************ 00:32:27.859 13:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:27.859 13:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:27.859 13:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:32:27.859 13:36:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:27.859 ************************************ 00:32:27.859 START TEST nvmf_host_management 00:32:27.859 ************************************ 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:27.859 * Looking for test storage... 00:32:27.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:27.859 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:27.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.860 --rc genhtml_branch_coverage=1 00:32:27.860 --rc genhtml_function_coverage=1 00:32:27.860 --rc genhtml_legend=1 00:32:27.860 --rc geninfo_all_blocks=1 00:32:27.860 --rc geninfo_unexecuted_blocks=1 00:32:27.860 00:32:27.860 ' 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:27.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.860 --rc genhtml_branch_coverage=1 00:32:27.860 --rc genhtml_function_coverage=1 00:32:27.860 --rc genhtml_legend=1 00:32:27.860 --rc geninfo_all_blocks=1 00:32:27.860 --rc geninfo_unexecuted_blocks=1 00:32:27.860 00:32:27.860 ' 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:27.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.860 --rc genhtml_branch_coverage=1 00:32:27.860 --rc genhtml_function_coverage=1 00:32:27.860 --rc genhtml_legend=1 00:32:27.860 --rc geninfo_all_blocks=1 00:32:27.860 --rc geninfo_unexecuted_blocks=1 00:32:27.860 00:32:27.860 ' 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:27.860 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:27.860 --rc genhtml_branch_coverage=1 00:32:27.860 --rc genhtml_function_coverage=1 00:32:27.860 --rc genhtml_legend=1 
00:32:27.860 --rc geninfo_all_blocks=1 00:32:27.860 --rc geninfo_unexecuted_blocks=1 00:32:27.860 00:32:27.860 ' 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.860 13:36:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:27.860 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:27.861 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.861 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:27.861 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:27.861 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:27.861 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.861 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:27.861 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.861 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:27.861 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:27.861 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:32:27.861 13:36:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:35.997 13:36:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:35.997 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:35.997 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
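The device scan running through this stretch classifies NICs by PCI vendor/device ID before picking test interfaces: nvmf/common.sh keeps e810, x722, and mlx ID lists for Intel (vendor 0x8086) and Mellanox (vendor 0x15b3) parts and matches each network-class PCI function against them, as the Found 0000:31:00.0 (0x8086 - 0x159b) record above and the Found net devices records below confirm. A rough sketch of that lookup, reading IDs from sysfs; the helper is illustrative, not the harness code:

# Classify network-class PCI functions by "vendor:device" ID.
declare -A nic_class
e810_ids="0x8086:0x1592 0x8086:0x159b"   # Intel E810 variants, as listed in the trace
mlx_ids="0x15b3:0x1017 0x15b3:0x1019"    # two of the Mellanox IDs from the trace

for dev in /sys/bus/pci/devices/*; do
    [[ $(<"$dev/class") == 0x0200* ]] || continue      # 0x02xxxx = network controller
    id="$(<"$dev/vendor"):$(<"$dev/device")"
    addr=${dev##*/}
    case " $e810_ids " in *" $id "*) nic_class[$addr]=e810 ;; esac
    case " $mlx_ids "  in *" $id "*) nic_class[$addr]=mlx  ;; esac
done

for addr in "${!nic_class[@]}"; do
    echo "Found $addr (${nic_class[$addr]})"
done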
00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:35.997 Found net devices under 0000:31:00.0: cvl_0_0 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:35.997 Found net devices under 0000:31:00.1: cvl_0_1 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:35.997 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:35.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:35.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:32:35.998 00:32:35.998 --- 10.0.0.2 ping statistics --- 00:32:35.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.998 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:35.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:35.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:32:35.998 00:32:35.998 --- 10.0.0.1 ping statistics --- 00:32:35.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:35.998 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:35.998 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:36.259 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:36.259 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:36.259 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:36.259 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:36.259 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:36.259 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:36.259 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1166359 00:32:36.259 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1166359 00:32:36.259 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:36.259 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1166359 ']' 00:32:36.259 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.259 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:36.259 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:36.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.259 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:36.259 13:36:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:36.259 [2024-12-05 13:36:58.665641] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:36.259 [2024-12-05 13:36:58.666802] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:32:36.259 [2024-12-05 13:36:58.666856] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.259 [2024-12-05 13:36:58.775154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:36.518 [2024-12-05 13:36:58.828270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.519 [2024-12-05 13:36:58.828321] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.519 [2024-12-05 13:36:58.828330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.519 [2024-12-05 13:36:58.828337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:36.519 [2024-12-05 13:36:58.828344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:36.519 [2024-12-05 13:36:58.830387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:36.519 [2024-12-05 13:36:58.830556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:36.519 [2024-12-05 13:36:58.830723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.519 [2024-12-05 13:36:58.830723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:36.519 [2024-12-05 13:36:58.907706] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:36.519 [2024-12-05 13:36:58.908427] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:36.519 [2024-12-05 13:36:58.909168] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:36.519 [2024-12-05 13:36:58.909397] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:36.519 [2024-12-05 13:36:58.909532] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
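Before the target start logged above, the prologue isolated one of the two E810 ports in a private network namespace and opened the NVMe/TCP port; nvmf_tgt then runs inside that namespace (ip netns exec cvl_0_0_ns_spdk ... --interrupt-mode -m 0x1E). The topology, condensed from the ip/iptables records earlier in this log into one runnable sequence (interface names as in this job):

# The target side gets its own namespace; the initiator stays in the root one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target port

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open TCP/4420; the SPDK_NVMF comment tag lets the teardown strip the rule
# later with: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1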
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:37.088 [2024-12-05 13:36:59.527607] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:37.088 Malloc0
00:32:37.088 [2024-12-05 13:36:59.619755] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:37.088 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:37.347 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1166451
00:32:37.347 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1166451 /var/tmp/bdevperf.sock
00:32:37.347 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1166451 ']'
00:32:37.347 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:37.347 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:37.347 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:32:37.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:37.347 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:32:37.347 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:37.347 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:32:37.347 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:37.348 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:32:37.348 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:32:37.348 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:32:37.348 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:32:37.348 {
00:32:37.348 "params": {
00:32:37.348 "name": "Nvme$subsystem",
00:32:37.348 "trtype": "$TEST_TRANSPORT",
00:32:37.348 "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:37.348 "adrfam": "ipv4",
00:32:37.348 "trsvcid": "$NVMF_PORT",
00:32:37.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:37.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:37.348 "hdgst": ${hdgst:-false},
00:32:37.348 "ddgst": ${ddgst:-false}
00:32:37.348 },
00:32:37.348 "method": "bdev_nvme_attach_controller"
00:32:37.348 }
00:32:37.348 EOF
00:32:37.348 )")
00:32:37.348 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:32:37.348 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:32:37.348 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:32:37.348 13:36:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:32:37.348 "params": {
00:32:37.348 "name": "Nvme0",
00:32:37.348 "trtype": "tcp",
00:32:37.348 "traddr": "10.0.0.2",
00:32:37.348 "adrfam": "ipv4",
00:32:37.348 "trsvcid": "4420",
00:32:37.348 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:32:37.348 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:32:37.348 "hdgst": false,
00:32:37.348 "ddgst": false
00:32:37.348 },
00:32:37.348 "method": "bdev_nvme_attach_controller"
00:32:37.348 }'
00:32:37.348 [2024-12-05 13:36:59.732480] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization...
00:32:37.348 [2024-12-05 13:36:59.732535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166451 ]
00:32:37.348 [2024-12-05 13:36:59.810473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:37.348 [2024-12-05 13:36:59.847193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:37.607 Running I/O for 10 seconds...
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=578
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 578 -ge 100 ']'
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.178 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:38.178 [2024-12-05 13:37:00.587254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.178 [2024-12-05 13:37:00.587294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.178 [2024-12-05 13:37:00.587303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.178 [2024-12-05 13:37:00.587310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.178 [2024-12-05 13:37:00.587317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.178 [2024-12-05 13:37:00.587324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.178 [2024-12-05 13:37:00.587331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.179 [2024-12-05 13:37:00.587338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.179 [2024-12-05 13:37:00.587345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.179 [2024-12-05 13:37:00.587352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.179 [2024-12-05 13:37:00.587359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.179 [2024-12-05 13:37:00.587366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.179 [2024-12-05 13:37:00.587373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.179 [2024-12-05 13:37:00.587380] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.179 [2024-12-05 13:37:00.587387] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.179 [2024-12-05 13:37:00.587393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.179 [2024-12-05 13:37:00.587406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17993e0 is same with the state(6) to be set
00:32:38.179 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.179 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:32:38.179 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.179 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:38.179 [2024-12-05 13:37:00.598104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:38.179 [2024-12-05 13:37:00.598140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:32:38.179 [2024-12-05 13:37:00.598159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:32:38.179 [2024-12-05 13:37:00.598175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:32:38.179 [2024-12-05 13:37:00.598191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70ab10 is same with the state(6) to be set
00:32:38.179 [2024-12-05 13:37:00.598548] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:32:38.179 [2024-12-05 13:37:00.598620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.179 [2024-12-05 13:37:00.598973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.179 [2024-12-05 13:37:00.598983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.598990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.180 [2024-12-05 13:37:00.599487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.180 [2024-12-05 13:37:00.599496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.181 [2024-12-05 13:37:00.599504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.181 [2024-12-05 13:37:00.599513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.181 [2024-12-05 13:37:00.599520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.181 [2024-12-05 13:37:00.599530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.181 [2024-12-05 13:37:00.599537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.181 [2024-12-05 13:37:00.599546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.181 [2024-12-05 13:37:00.599554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.181 [2024-12-05 13:37:00.599564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.181 [2024-12-05 13:37:00.599571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.181 [2024-12-05 13:37:00.599581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.181 [2024-12-05 13:37:00.599590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.181 [2024-12-05 13:37:00.599599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.181 [2024-12-05 13:37:00.599607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.181 [2024-12-05 13:37:00.599617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.181 [2024-12-05 13:37:00.599624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.181 [2024-12-05 13:37:00.599634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.181 [2024-12-05 13:37:00.599641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.181 [2024-12-05 13:37:00.599650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.181 [2024-12-05 13:37:00.599658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.181 [2024-12-05 13:37:00.599667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.181 [2024-12-05 13:37:00.599674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.181 [2024-12-05 13:37:00.599684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.181 [2024-12-05 13:37:00.599691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.181 [2024-12-05 13:37:00.599701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.181 [2024-12-05 13:37:00.599708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.181 [2024-12-05 13:37:00.599718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:38.181 [2024-12-05 13:37:00.599725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:38.181 [2024-12-05 13:37:00.599734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x71b270 is same with the state(6) to be set
00:32:38.181 [2024-12-05 13:37:00.600960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:32:38.181 task offset: 83328 on job bdev=Nvme0n1 fails
00:32:38.181 
00:32:38.181 Latency(us)
00:32:38.181 [2024-12-05T12:37:00.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:38.181 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:38.181 Job: Nvme0n1 ended in about 0.45 seconds with error
00:32:38.181 Verification LBA range: start 0x0 length 0x400
00:32:38.181 Nvme0n1 : 0.45 1438.38 89.90 143.61 0.00 39311.25 2061.65 34515.63
00:32:38.181 [2024-12-05T12:37:00.749Z] ===================================================================================================================
00:32:38.181 [2024-12-05T12:37:00.749Z] Total : 1438.38 89.90 143.61 0.00 39311.25 2061.65 34515.63
00:32:38.181 [2024-12-05 13:37:00.602949] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:32:38.181 [2024-12-05 13:37:00.602970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x70ab10 (9): Bad file descriptor
00:32:38.181 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.181 13:37:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:32:38.181 [2024-12-05 13:37:00.608742] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
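The flood of ABORTED - SQ DELETION completions above is the expected fallout of nvmf_subsystem_remove_host: the target drops the host's queue pair, every in-flight verify I/O is completed as aborted, and the initiator side of bdevperf disconnects and resets the controller; once nvmf_subsystem_add_host restores access, the reset succeeds. The RPC pair being exercised looks roughly like this (a sketch against SPDK's stock rpc.py, using the NQNs visible in the trace):

    # Revoke the host's access to the subsystem; its live connection is torn down.
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # Re-grant access; the host's automatic reconnect/reset can now complete.
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0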
00:32:39.119 13:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1166451
00:32:39.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1166451) - No such process
00:32:39.119 13:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:32:39.119 13:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:32:39.119 13:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:32:39.119 13:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:32:39.119 13:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:32:39.119 13:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:32:39.119 13:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:32:39.119 13:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:32:39.119 {
00:32:39.119 "params": {
00:32:39.119 "name": "Nvme$subsystem",
00:32:39.119 "trtype": "$TEST_TRANSPORT",
00:32:39.119 "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:39.119 "adrfam": "ipv4",
00:32:39.119 "trsvcid": "$NVMF_PORT",
00:32:39.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:39.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:39.119 "hdgst": ${hdgst:-false},
00:32:39.119 "ddgst": ${ddgst:-false}
00:32:39.119 },
00:32:39.119 "method": "bdev_nvme_attach_controller"
00:32:39.119 }
00:32:39.119 EOF
00:32:39.119 )")
00:32:39.119 13:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:32:39.119 13:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:32:39.119 13:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:32:39.119 13:37:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:32:39.119 "params": {
00:32:39.119 "name": "Nvme0",
00:32:39.119 "trtype": "tcp",
00:32:39.119 "traddr": "10.0.0.2",
00:32:39.119 "adrfam": "ipv4",
00:32:39.119 "trsvcid": "4420",
00:32:39.119 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:32:39.119 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:32:39.119 "hdgst": false,
00:32:39.119 "ddgst": false
00:32:39.119 },
00:32:39.119 "method": "bdev_nvme_attach_controller"
00:32:39.119 }'
00:32:39.119 [2024-12-05 13:37:01.663880] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization...
00:32:39.119 [2024-12-05 13:37:01.663935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166824 ]
00:32:39.381 [2024-12-05 13:37:01.745299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:39.381 [2024-12-05 13:37:01.781011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:39.643 Running I/O for 1 seconds...
00:32:40.587 1470.00 IOPS, 91.88 MiB/s
00:32:40.587 
00:32:40.587 Latency(us)
00:32:40.587 [2024-12-05T12:37:03.155Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:40.587 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:40.587 Verification LBA range: start 0x0 length 0x400
00:32:40.587 Nvme0n1 : 1.04 1474.88 92.18 0.00 0.00 42685.33 8792.75 36700.16
00:32:40.587 [2024-12-05T12:37:03.155Z] ===================================================================================================================
00:32:40.587 [2024-12-05T12:37:03.156Z] Total : 1474.88 92.18 0.00 0.00 42685.33 8792.75 36700.16
00:32:40.588 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:32:40.588 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:32:40.588 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:32:40.588 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:32:40.588 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:32:40.588 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:40.588 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:32:40.588 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:40.588 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:32:40.588 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:40.588 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:40.588 rmmod nvme_tcp
00:32:40.884 rmmod nvme_fabrics
00:32:40.884 rmmod nvme_keyring
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1166359 ']'
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1166359
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1166359 ']'
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1166359
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1166359
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1166359'
00:32:40.884 killing process with pid 1166359
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1166359
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1166359
00:32:40.884 [2024-12-05 13:37:03.380315] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:40.884 13:37:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:32:43.427 
00:32:43.427 real 0m15.478s
00:32:43.427 user 0m19.232s
00:32:43.427 sys 0m8.102s
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:43.427 ************************************
00:32:43.427 END TEST nvmf_host_management
00:32:43.427 ************************************
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:43.427 ************************************
00:32:43.427 START TEST nvmf_lvol
00:32:43.427 ************************************
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:32:43.427 * Looking for test storage...
00:32:43.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:32:43.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:43.427 --rc genhtml_branch_coverage=1
00:32:43.427 --rc genhtml_function_coverage=1
00:32:43.427 --rc genhtml_legend=1
00:32:43.427 --rc geninfo_all_blocks=1
00:32:43.427 --rc geninfo_unexecuted_blocks=1
00:32:43.427 
00:32:43.427 '
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:32:43.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:43.427 --rc genhtml_branch_coverage=1
00:32:43.427 --rc genhtml_function_coverage=1
00:32:43.427 --rc genhtml_legend=1
00:32:43.427 --rc geninfo_all_blocks=1
00:32:43.427 --rc geninfo_unexecuted_blocks=1
00:32:43.427 
00:32:43.427 '
00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:32:43.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:43.427 --rc genhtml_branch_coverage=1
00:32:43.427 --rc genhtml_function_coverage=1
00:32:43.427 --rc genhtml_legend=1 00:32:43.427 --rc geninfo_all_blocks=1 00:32:43.427 --rc geninfo_unexecuted_blocks=1 00:32:43.427 00:32:43.427 ' 00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:43.427 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:43.428 13:37:05 
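Interleaved with the PATH exports above, nvmf/common.sh builds the target's launch arguments: the appends traced at @29, @31 and (just below) @34 accumulate into the NVMF_APP array that the test later prefixes with ip netns exec. A minimal sketch of that assembly, where only the appends come from the log and everything else (binary path, variable values, branch shape) is assumed:

    #!/usr/bin/env bash
    # Hedged sketch of the NVMF_APP assembly traced around nvmf/common.sh@29-@34;
    # only the array appends are from the log, the surrounding shape is assumed.
    NVMF_APP=(./build/bin/nvmf_tgt)    # hypothetical path to the target binary
    NVMF_APP_SHM_ID=0                  # becomes '-i 0' on the launch line later
    NO_HUGE=()                         # stays empty in this run
    interrupt_mode=1                   # the "'[' 1 -eq 1 ']'" branch just below

    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id + full tracepoint mask
    NVMF_APP+=("${NO_HUGE[@]}")                   # no-op while the array is empty
    (( interrupt_mode )) && NVMF_APP+=(--interrupt-mode)
    printf '%s ' "${NVMF_APP[@]}"; echo

The result is consistent with the launch line that appears later in the log: nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7.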
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:43.428 13:37:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:51.574 13:37:13 
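gather_supported_nvmf_pci_devs, traced above, whitelists NICs by PCI vendor:device ID and lands on two E810 ports here (the "(( 2 == 0 ))" entry is the empty-result check, which passes). A rough standalone equivalent, assuming pci_bus_cache maps "vendor:device" to space-separated BDF addresses; in this sketch it is filled from /sys purely for illustration:

    #!/usr/bin/env bash
    # Sketch only: bucket PCI NICs by vendor:device the way the trace does.
    declare -A pci_bus_cache
    for dev in /sys/bus/pci/devices/*; do
      key="$(<"$dev/vendor"):$(<"$dev/device")"   # e.g. "0x8086:0x159b"
      pci_bus_cache["$key"]+="${dev##*/} "        # append the BDF, e.g. 0000:31:00.0
    done

    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})     # two E810 device IDs; 0x159b is
    e810+=(${pci_bus_cache["$intel:0x159b"]})     # what this host reports twice
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})   # one of several ConnectX IDs
    pci_devs=("${e810[@]}")                       # e810 wins for a TCP run
    echo "candidate NICs: ${pci_devs[*]:-none}"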
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:51.574 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:51.574 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:51.574 Found net devices under 0000:31:00.0: cvl_0_0 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:51.574 Found net devices under 0000:31:00.1: cvl_0_1 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:51.574 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:51.575 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:51.575 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:51.575 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:51.575 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:51.575 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:51.575 
13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:51.575 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:51.575 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:51.575 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:51.575 13:37:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:51.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:51.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:32:51.575 00:32:51.575 --- 10.0.0.2 ping statistics --- 00:32:51.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.575 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:51.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:51.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:32:51.575 00:32:51.575 --- 10.0.0.1 ping statistics --- 00:32:51.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.575 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1171816 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1171816 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1171816 ']' 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:51.575 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:51.836 [2024-12-05 13:37:14.175433] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
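The nvmf_tcp_init trace above (nvmf/common.sh@250 through @291) builds the point-to-point topology the rest of the run depends on: one port of the NIC is moved into a fresh network namespace as the target side at 10.0.0.2, its sibling stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP port 4420, and one ping in each direction proves the path. A condensed sketch of the same steps, with interface names taken from this log (they differ per rig, and the real script tags its iptables rule with a longer SPDK_NVMF comment):

    #!/usr/bin/env bash
    set -e
    NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1   # names from this run
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"                  # target port into the netns
    ip addr add 10.0.0.1/24 dev "$INI_IF"              # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF                   # comment shortened here
    ping -c 1 10.0.0.2                                 # root ns -> target port
    ip netns exec "$NS" ping -c 1 10.0.0.1             # target ns -> initiator

Keeping the target behind a namespace forces all NVMe/TCP traffic out one physical port and back in the other, so the test exercises the real NIC path rather than loopback.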
00:32:51.836 [2024-12-05 13:37:14.177158] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:32:51.836 [2024-12-05 13:37:14.177235] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.836 [2024-12-05 13:37:14.270994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:51.836 [2024-12-05 13:37:14.312967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:51.836 [2024-12-05 13:37:14.313003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:51.836 [2024-12-05 13:37:14.313011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:51.836 [2024-12-05 13:37:14.313018] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:51.836 [2024-12-05 13:37:14.313024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:51.836 [2024-12-05 13:37:14.314490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.836 [2024-12-05 13:37:14.314607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:51.836 [2024-12-05 13:37:14.314609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.836 [2024-12-05 13:37:14.371763] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:51.836 [2024-12-05 13:37:14.372338] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:51.836 [2024-12-05 13:37:14.372575] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:51.836 [2024-12-05 13:37:14.372806] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
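nvmfappstart then launches the target inside that namespace, pinned to three cores (-m 0x7) and in interrupt mode, and waits for the RPC socket to answer; the DPDK EAL parameter dump, the three reactor notices, and the per-poll-group intr-mode notices above are what success looks like. A hedged stand-in for that wait, polling with the stock rpc.py (the retry loop is an assumed shape, not a copy of waitforlisten):

    #!/usr/bin/env bash
    SPDK=./spdk                                       # assumed checkout path
    ip netns exec cvl_0_0_ns_spdk \
      "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
    nvmfpid=$!
    for _ in $(seq 1 100); do                         # ~10 s budget, sketch only
      if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
          >/dev/null 2>&1; then
        echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
        break
      fi
      kill -0 "$nvmfpid" 2>/dev/null || { echo 'target exited early' >&2; exit 1; }
      sleep 0.1
    done

In interrupt mode the reactors sleep on a file descriptor instead of busy-polling, which is what the spdk_thread_set_interrupt_mode notices above record for app_thread and each nvmf_tgt poll group.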
00:32:52.776 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:52.776 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:52.777 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:52.777 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:52.777 13:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:52.777 13:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:52.777 13:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:52.777 [2024-12-05 13:37:15.171489] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:52.777 13:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:53.036 13:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:53.036 13:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:53.036 13:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:53.036 13:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:53.296 13:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:53.556 13:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=61ddae08-60ad-4631-a1da-427841f4cb55 00:32:53.556 13:37:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 61ddae08-60ad-4631-a1da-427841f4cb55 lvol 20 00:32:53.816 13:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b767e508-73f9-4360-a114-4520fdb2aef7 00:32:53.816 13:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:53.816 13:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b767e508-73f9-4360-a114-4520fdb2aef7 00:32:54.075 13:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:54.075 [2024-12-05 13:37:16.611294] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:32:54.075 13:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:54.336 13:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1172509 00:32:54.336 13:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:54.336 13:37:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:55.275 13:37:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b767e508-73f9-4360-a114-4520fdb2aef7 MY_SNAPSHOT 00:32:55.534 13:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=86dea374-2a61-41c5-86ac-3a9eba7c5756 00:32:55.534 13:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b767e508-73f9-4360-a114-4520fdb2aef7 30 00:32:55.794 13:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 86dea374-2a61-41c5-86ac-3a9eba7c5756 MY_CLONE 00:32:56.055 13:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c257629f-2d81-4287-8ec3-c61a3890ea8c 00:32:56.055 13:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c257629f-2d81-4287-8ec3-c61a3890ea8c 00:32:56.626 13:37:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1172509 00:33:04.784 Initializing NVMe Controllers 00:33:04.784 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:33:04.784 Controller IO queue size 128, less than required. 00:33:04.784 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:04.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:33:04.784 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:33:04.784 Initialization complete. Launching workers. 
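The body of the test is the RPC sequence traced above: build a raid0 from two malloc bdevs, put a logical-volume store on it, export a 20 MiB lvol over NVMe/TCP, run spdk_nvme_perf (4 KiB random writes, queue depth 128, cores 0x18) against it, and exercise snapshot, resize, clone, and inflate while that I/O is in flight. Stripped of the trace prefixes, the sequence reads roughly as follows; each create call prints a name or UUID, captured here into shell variables:

    #!/usr/bin/env bash
    rpc=./spdk/scripts/rpc.py            # assumed path to the stock RPC client
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                       # -> Malloc0
    $rpc bdev_malloc_create 64 512                       # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB to start
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
    # ... spdk_nvme_perf runs in the background from here ...
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # lvol becomes a clone
    $rpc bdev_lvol_resize "$lvol" 30                     # grow 20 -> 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                      # detach from snapshot

The 20 and 30 MiB sizes match LVOL_BDEV_INIT_SIZE and LVOL_BDEV_FINAL_SIZE set at the top of the script, and the summary table just below reports both perf cores completing against the reshaped volume.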
00:33:04.784 ========================================================
00:33:04.784 Latency(us)
00:33:04.784 Device Information : IOPS MiB/s Average min max
00:33:04.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12382.50 48.37 10339.70 1611.91 53012.96
00:33:04.784 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15106.50 59.01 8473.11 3897.87 54961.64
00:33:04.784 ========================================================
00:33:04.784 Total : 27489.00 107.38 9313.92 1611.91 54961.64
00:33:04.784
00:33:04.784 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:04.784 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b767e508-73f9-4360-a114-4520fdb2aef7 00:33:05.045 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 61ddae08-60ad-4631-a1da-427841f4cb55 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:05.307 rmmod nvme_tcp 00:33:05.307 rmmod nvme_fabrics 00:33:05.307 rmmod nvme_keyring 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1171816 ']' 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1171816 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1171816 ']' 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1171816 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1171816 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1171816' 00:33:05.307 killing process with pid 1171816 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1171816 00:33:05.307 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1171816 00:33:05.567 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:05.567 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:05.567 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:05.567 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:33:05.567 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:33:05.567 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:05.567 13:37:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:33:05.567 13:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:05.567 13:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:05.567 13:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.567 13:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.567 13:37:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:08.113 00:33:08.113 real 0m24.514s 00:33:08.113 user 0m55.905s 00:33:08.113 sys 0m11.258s 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:33:08.113 ************************************ 00:33:08.113 END TEST nvmf_lvol 00:33:08.113 ************************************ 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:08.113 ************************************ 00:33:08.113 START TEST nvmf_lvs_grow 00:33:08.113 
************************************ 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:33:08.113 * Looking for test storage... 00:33:08.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:33:08.113 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:08.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.114 --rc genhtml_branch_coverage=1 00:33:08.114 --rc genhtml_function_coverage=1 00:33:08.114 --rc genhtml_legend=1 00:33:08.114 --rc geninfo_all_blocks=1 00:33:08.114 --rc geninfo_unexecuted_blocks=1 00:33:08.114 00:33:08.114 ' 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:08.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.114 --rc genhtml_branch_coverage=1 00:33:08.114 --rc genhtml_function_coverage=1 00:33:08.114 --rc genhtml_legend=1 00:33:08.114 --rc geninfo_all_blocks=1 00:33:08.114 --rc geninfo_unexecuted_blocks=1 00:33:08.114 00:33:08.114 ' 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:08.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.114 --rc genhtml_branch_coverage=1 00:33:08.114 --rc genhtml_function_coverage=1 00:33:08.114 --rc genhtml_legend=1 00:33:08.114 --rc geninfo_all_blocks=1 00:33:08.114 --rc geninfo_unexecuted_blocks=1 00:33:08.114 00:33:08.114 ' 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:08.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.114 --rc genhtml_branch_coverage=1 00:33:08.114 --rc genhtml_function_coverage=1 00:33:08.114 --rc genhtml_legend=1 00:33:08.114 --rc geninfo_all_blocks=1 00:33:08.114 --rc geninfo_unexecuted_blocks=1 00:33:08.114 00:33:08.114 ' 00:33:08.114 13:37:30 
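This is the same scripts/common.sh guard that opened nvmf_lvol: before enabling branch/function coverage flags, the harness compares the installed lcov version against 1.15 field by field. The comparison idiom reads roughly as below, a paraphrase of the @336-@368 trace entries rather than the verbatim function (the real one also validates each field with a ^[0-9]+$ check):

    #!/usr/bin/env bash
    lt() {                                   # true when version $1 < version $2
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"       # split on '.', '-' and ':'
      IFS='.-:' read -ra ver2 <<< "$2"
      local v max
      max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do      # compare fields left to right
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1                               # equal is not less-than
    }
    lt 1.15 2 && echo "old lcov: use the --rc lcov_* option spellings"

Here lt 1.15 2 succeeds (1 < 2 in the first field), so the trace goes on to export the --rc lcov_branch_coverage / lcov_function_coverage options seen above.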
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:33:08.114 13:37:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:16.444 13:37:38 
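nvmftestinit for this follow-on test, traced just above, first runs remove_spdk_ns (with its xtrace muted through fd 15) so that the namespace left behind by nvmf_lvol is gone before the topology is rebuilt, mirroring the nvmftestfini teardown a few entries earlier. A plausible shape for that cleanup, assuming SPDK-created namespaces share the *_ns_spdk suffix; the function name is from the log, the body is not:

    #!/usr/bin/env bash
    # Sketch: tear down any namespace a previous SPDK test left behind.
    _remove_spdk_ns() {
      local ns
      while read -r ns _; do
        [[ $ns == *_ns_spdk ]] || continue        # suffix convention assumed
        ip netns delete "$ns" && echo "removed $ns"
      done < <(ip netns list)
    }
    _remove_spdk_ns 15>/dev/null    # fd 15 carries xtrace output in the harness

With the slate clean, the PCI scan that follows repeats the e810/x722/mlx bucketing from the first test and rediscovers the same two ports.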
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
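The array initializations just traced implement NIC discovery: supported devices are grouped by PCI vendor:device ID into e810, x722 and mlx lists, and the e810 list is kept for this run. A rough sketch of the idiom, assuming pci_bus_cache is an associative array mapping "vendor:device" keys to PCI addresses that the harness filled in earlier:

intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
e810+=(${pci_bus_cache["$intel:0x1592"]})    # Intel E810-family IDs
e810+=(${pci_bus_cache["$intel:0x159b"]})    # the ID both ports report below
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # one of several ConnectX IDs probed
pci_devs=("${e810[@]}")                      # e810 hardware found: keep only E810 ports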
00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:33:16.444 Found 0000:31:00.0 (0x8086 - 0x159b) 00:33:16.444 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:33:16.445 Found 0000:31:00.1 (0x8086 - 0x159b) 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:33:16.445 Found net devices under 0000:31:00.0: cvl_0_0 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:33:16.445 Found net devices under 0000:31:00.1: cvl_0_1 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:16.445 13:37:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:16.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:16.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:33:16.445 00:33:16.445 --- 10.0.0.2 ping statistics --- 00:33:16.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.445 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:16.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:16.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:33:16.445 00:33:16.445 --- 10.0.0.1 ping statistics --- 00:33:16.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:16.445 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1179211 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1179211 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1179211 ']' 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:16.445 13:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:16.445 [2024-12-05 13:37:38.683261] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
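nvmf_tcp_init, traced above, isolates the target side in a network namespace: port cvl_0_0 moves into cvl_0_0_ns_spdk with 10.0.0.2, port cvl_0_1 stays in the root namespace as the initiator with 10.0.0.1, an iptables rule opens the NVMe/TCP port, and a ping in each direction proves the path. A condensed replay of those commands (names and addresses as logged; old addresses are flushed first; needs root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # tagged with an SPDK_NVMF comment in the log
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

The target is then started inside the namespace, as traced: ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1.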
00:33:16.445 [2024-12-05 13:37:38.684423] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:33:16.445 [2024-12-05 13:37:38.684477] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:16.445 [2024-12-05 13:37:38.774582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.445 [2024-12-05 13:37:38.814565] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:16.445 [2024-12-05 13:37:38.814600] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:16.446 [2024-12-05 13:37:38.814608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:16.446 [2024-12-05 13:37:38.814615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:16.446 [2024-12-05 13:37:38.814621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:16.446 [2024-12-05 13:37:38.815210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.446 [2024-12-05 13:37:38.871670] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:16.446 [2024-12-05 13:37:38.871924] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:17.015 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:17.015 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:33:17.015 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:17.015 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:17.015 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:17.015 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:17.015 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:17.276 [2024-12-05 13:37:39.667980] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.276 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:33:17.276 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:17.276 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:17.276 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:17.276 ************************************ 00:33:17.276 START TEST lvs_grow_clean 00:33:17.276 ************************************ 00:33:17.276 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # 
lvs_grow 00:33:17.276 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:17.276 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:17.276 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:17.276 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:17.276 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:17.276 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:17.276 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:17.276 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:17.276 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:17.536 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:17.536 13:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:17.796 13:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=74400af6-b4a1-4009-bc9c-2f6aa3fd6eb9 00:33:17.796 13:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74400af6-b4a1-4009-bc9c-2f6aa3fd6eb9 00:33:17.796 13:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:17.796 13:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:17.796 13:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:17.796 13:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 74400af6-b4a1-4009-bc9c-2f6aa3fd6eb9 lvol 150 00:33:18.055 13:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=b98c7421-b9b5-44b4-889b-2d500a851560 00:33:18.055 13:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:18.055 13:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:18.314 [2024-12-05 13:37:40.643645] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:18.314 [2024-12-05 13:37:40.643786] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:18.314 true 00:33:18.314 13:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74400af6-b4a1-4009-bc9c-2f6aa3fd6eb9 00:33:18.314 13:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:18.314 13:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:18.314 13:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:18.573 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b98c7421-b9b5-44b4-889b-2d500a851560 00:33:18.833 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:18.833 [2024-12-05 13:37:41.328251] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:18.833 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:19.093 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1179783 00:33:19.093 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:19.093 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:19.093 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1179783 /var/tmp/bdevperf.sock 00:33:19.093 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1179783 ']' 00:33:19.093 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:33:19.093 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:19.093 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:19.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:19.093 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:19.093 13:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:19.093 [2024-12-05 13:37:41.568371] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:33:19.093 [2024-12-05 13:37:41.568435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179783 ] 00:33:19.353 [2024-12-05 13:37:41.666766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.353 [2024-12-05 13:37:41.709807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.921 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:19.921 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:33:19.921 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:20.180 Nvme0n1 00:33:20.180 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:20.440 [ 00:33:20.440 { 00:33:20.440 "name": "Nvme0n1", 00:33:20.440 "aliases": [ 00:33:20.440 "b98c7421-b9b5-44b4-889b-2d500a851560" 00:33:20.440 ], 00:33:20.440 "product_name": "NVMe disk", 00:33:20.440 "block_size": 4096, 00:33:20.440 "num_blocks": 38912, 00:33:20.440 "uuid": "b98c7421-b9b5-44b4-889b-2d500a851560", 00:33:20.440 "numa_id": 0, 00:33:20.440 "assigned_rate_limits": { 00:33:20.440 "rw_ios_per_sec": 0, 00:33:20.440 "rw_mbytes_per_sec": 0, 00:33:20.440 "r_mbytes_per_sec": 0, 00:33:20.440 "w_mbytes_per_sec": 0 00:33:20.440 }, 00:33:20.440 "claimed": false, 00:33:20.440 "zoned": false, 00:33:20.440 "supported_io_types": { 00:33:20.440 "read": true, 00:33:20.440 "write": true, 00:33:20.440 "unmap": true, 00:33:20.440 "flush": true, 00:33:20.440 "reset": true, 00:33:20.440 "nvme_admin": true, 00:33:20.440 "nvme_io": true, 00:33:20.440 "nvme_io_md": false, 00:33:20.440 "write_zeroes": true, 00:33:20.440 "zcopy": false, 00:33:20.440 "get_zone_info": false, 00:33:20.440 "zone_management": false, 00:33:20.440 "zone_append": false, 00:33:20.440 "compare": true, 00:33:20.440 "compare_and_write": true, 00:33:20.440 "abort": true, 00:33:20.440 "seek_hole": false, 00:33:20.440 "seek_data": false, 00:33:20.440 "copy": true, 
00:33:20.440 "nvme_iov_md": false 00:33:20.440 }, 00:33:20.440 "memory_domains": [ 00:33:20.440 { 00:33:20.440 "dma_device_id": "system", 00:33:20.440 "dma_device_type": 1 00:33:20.440 } 00:33:20.440 ], 00:33:20.440 "driver_specific": { 00:33:20.440 "nvme": [ 00:33:20.440 { 00:33:20.440 "trid": { 00:33:20.440 "trtype": "TCP", 00:33:20.440 "adrfam": "IPv4", 00:33:20.440 "traddr": "10.0.0.2", 00:33:20.440 "trsvcid": "4420", 00:33:20.440 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:20.440 }, 00:33:20.440 "ctrlr_data": { 00:33:20.440 "cntlid": 1, 00:33:20.440 "vendor_id": "0x8086", 00:33:20.440 "model_number": "SPDK bdev Controller", 00:33:20.440 "serial_number": "SPDK0", 00:33:20.440 "firmware_revision": "25.01", 00:33:20.440 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:20.440 "oacs": { 00:33:20.440 "security": 0, 00:33:20.440 "format": 0, 00:33:20.440 "firmware": 0, 00:33:20.440 "ns_manage": 0 00:33:20.440 }, 00:33:20.440 "multi_ctrlr": true, 00:33:20.440 "ana_reporting": false 00:33:20.440 }, 00:33:20.440 "vs": { 00:33:20.440 "nvme_version": "1.3" 00:33:20.440 }, 00:33:20.440 "ns_data": { 00:33:20.440 "id": 1, 00:33:20.440 "can_share": true 00:33:20.440 } 00:33:20.440 } 00:33:20.440 ], 00:33:20.440 "mp_policy": "active_passive" 00:33:20.440 } 00:33:20.440 } 00:33:20.440 ] 00:33:20.440 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1179935 00:33:20.440 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:20.440 13:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:20.440 Running I/O for 10 seconds... 
00:33:21.379 Latency(us)
00:33:21.379 [2024-12-05T12:37:43.947Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:21.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:21.379 Nvme0n1 : 1.00 17847.00 69.71 0.00 0.00 0.00 0.00 0.00
00:33:21.379 [2024-12-05T12:37:43.947Z] ===================================================================================================================
00:33:21.379 [2024-12-05T12:37:43.947Z] Total : 17847.00 69.71 0.00 0.00 0.00 0.00 0.00
00:33:21.379
00:33:22.317 13:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 74400af6-b4a1-4009-bc9c-2f6aa3fd6eb9
00:33:22.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:22.317 Nvme0n1 : 2.00 17941.50 70.08 0.00 0.00 0.00 0.00 0.00
00:33:22.317 [2024-12-05T12:37:44.885Z] ===================================================================================================================
00:33:22.317 [2024-12-05T12:37:44.885Z] Total : 17941.50 70.08 0.00 0.00 0.00 0.00 0.00
00:33:22.317
00:33:22.576 true
00:33:22.576 13:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74400af6-b4a1-4009-bc9c-2f6aa3fd6eb9
00:33:22.576 13:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:33:22.836 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:33:22.836 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:33:22.836 13:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1179935
00:33:23.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:23.403 Nvme0n1 : 3.00 17993.67 70.29 0.00 0.00 0.00 0.00 0.00
00:33:23.403 [2024-12-05T12:37:45.971Z] ===================================================================================================================
00:33:23.403 [2024-12-05T12:37:45.971Z] Total : 17993.67 70.29 0.00 0.00 0.00 0.00 0.00
00:33:23.403
00:33:24.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:24.341 Nvme0n1 : 4.00 18031.75 70.44 0.00 0.00 0.00 0.00 0.00
00:33:24.341 [2024-12-05T12:37:46.909Z] ===================================================================================================================
00:33:24.341 [2024-12-05T12:37:46.909Z] Total : 18031.75 70.44 0.00 0.00 0.00 0.00 0.00
00:33:24.341
00:33:25.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:25.719 Nvme0n1 : 5.00 18057.60 70.54 0.00 0.00 0.00 0.00 0.00
00:33:25.719 [2024-12-05T12:37:48.287Z] ===================================================================================================================
00:33:25.719 [2024-12-05T12:37:48.287Z] Total : 18057.60 70.54 0.00 0.00 0.00 0.00 0.00
00:33:25.719
00:33:26.658 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:26.658 Nvme0n1 : 6.00 18077.67 70.62 0.00 0.00 0.00 0.00 0.00
00:33:26.658 [2024-12-05T12:37:49.226Z] ===================================================================================================================
00:33:26.658 [2024-12-05T12:37:49.226Z] Total : 18077.67 70.62 0.00 0.00 0.00 0.00 0.00
00:33:26.658
00:33:27.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:27.597 Nvme0n1 : 7.00 18107.71 70.73 0.00 0.00 0.00 0.00 0.00
00:33:27.597 [2024-12-05T12:37:50.165Z] ===================================================================================================================
00:33:27.597 [2024-12-05T12:37:50.165Z] Total : 18107.71 70.73 0.00 0.00 0.00 0.00 0.00
00:33:27.597
00:33:28.538 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:28.538 Nvme0n1 : 8.00 18098.50 70.70 0.00 0.00 0.00 0.00 0.00
00:33:28.538 [2024-12-05T12:37:51.106Z] ===================================================================================================================
00:33:28.538 [2024-12-05T12:37:51.106Z] Total : 18098.50 70.70 0.00 0.00 0.00 0.00 0.00
00:33:28.538
00:33:29.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:29.479 Nvme0n1 : 9.00 18114.44 70.76 0.00 0.00 0.00 0.00 0.00
00:33:29.479 [2024-12-05T12:37:52.047Z] ===================================================================================================================
00:33:29.479 [2024-12-05T12:37:52.047Z] Total : 18114.44 70.76 0.00 0.00 0.00 0.00 0.00
00:33:29.479
00:33:30.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:30.419 Nvme0n1 : 10.00 18123.70 70.80 0.00 0.00 0.00 0.00 0.00
00:33:30.419 [2024-12-05T12:37:52.987Z] ===================================================================================================================
00:33:30.419 [2024-12-05T12:37:52.987Z] Total : 18123.70 70.80 0.00 0.00 0.00 0.00 0.00
00:33:30.419
00:33:30.419
00:33:30.419 Latency(us)
00:33:30.419 [2024-12-05T12:37:52.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:30.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:30.419 Nvme0n1 : 10.01 18124.94 70.80 0.00 0.00 7059.91 2375.68 17148.59
00:33:30.419 [2024-12-05T12:37:52.987Z] ===================================================================================================================
00:33:30.419 [2024-12-05T12:37:52.987Z] Total : 18124.94 70.80 0.00 0.00 7059.91 2375.68 17148.59
00:33:30.419 {
00:33:30.419 "results": [
00:33:30.419 {
00:33:30.419 "job": "Nvme0n1",
00:33:30.419 "core_mask": "0x2",
00:33:30.419 "workload": "randwrite",
00:33:30.419 "status": "finished",
00:33:30.419 "queue_depth": 128,
00:33:30.419 "io_size": 4096,
00:33:30.419 "runtime": 10.00638,
00:33:30.419 "iops": 18124.93629064657,
00:33:30.419 "mibps": 70.80053238533816,
00:33:30.419 "io_failed": 0,
00:33:30.419 "io_timeout": 0,
00:33:30.419 "avg_latency_us": 7059.90793049008,
00:33:30.419 "min_latency_us": 2375.68,
00:33:30.419 "max_latency_us": 17148.586666666666
00:33:30.419 }
00:33:30.419 ],
00:33:30.419 "core_count": 1
00:33:30.419 }
00:33:30.419 13:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1179783
00:33:30.419 13:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1179783 ']'
00:33:30.419 13:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1179783
00:33:30.419
13:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:33:30.419 13:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:30.419 13:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1179783 00:33:30.679 13:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:30.679 13:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:30.679 13:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1179783' 00:33:30.679 killing process with pid 1179783 00:33:30.679 13:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1179783 00:33:30.679 Received shutdown signal, test time was about 10.000000 seconds 00:33:30.679 00:33:30.679 Latency(us) 00:33:30.679 [2024-12-05T12:37:53.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.679 [2024-12-05T12:37:53.247Z] =================================================================================================================== 00:33:30.679 [2024-12-05T12:37:53.247Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:30.679 13:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1179783 00:33:30.679 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:30.939 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:30.939 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74400af6-b4a1-4009-bc9c-2f6aa3fd6eb9 00:33:30.939 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:31.199 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:31.199 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:33:31.199 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:31.458 [2024-12-05 13:37:53.771722] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:31.458 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74400af6-b4a1-4009-bc9c-2f6aa3fd6eb9 00:33:31.458 
13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:33:31.458 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74400af6-b4a1-4009-bc9c-2f6aa3fd6eb9 00:33:31.458 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:31.458 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:31.458 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:31.458 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:31.458 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:31.458 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:31.458 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:31.458 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:31.459 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74400af6-b4a1-4009-bc9c-2f6aa3fd6eb9 00:33:31.459 request: 00:33:31.459 { 00:33:31.459 "uuid": "74400af6-b4a1-4009-bc9c-2f6aa3fd6eb9", 00:33:31.459 "method": "bdev_lvol_get_lvstores", 00:33:31.459 "req_id": 1 00:33:31.459 } 00:33:31.459 Got JSON-RPC error response 00:33:31.459 response: 00:33:31.459 { 00:33:31.459 "code": -19, 00:33:31.459 "message": "No such device" 00:33:31.459 } 00:33:31.459 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:33:31.459 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:31.459 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:31.459 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:31.459 13:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:31.719 aio_bdev 00:33:31.719 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
b98c7421-b9b5-44b4-889b-2d500a851560 00:33:31.719 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=b98c7421-b9b5-44b4-889b-2d500a851560 00:33:31.719 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:31.719 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:33:31.719 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:31.719 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:31.719 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:31.981 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b98c7421-b9b5-44b4-889b-2d500a851560 -t 2000 00:33:31.981 [ 00:33:31.981 { 00:33:31.981 "name": "b98c7421-b9b5-44b4-889b-2d500a851560", 00:33:31.981 "aliases": [ 00:33:31.981 "lvs/lvol" 00:33:31.981 ], 00:33:31.981 "product_name": "Logical Volume", 00:33:31.981 "block_size": 4096, 00:33:31.981 "num_blocks": 38912, 00:33:31.981 "uuid": "b98c7421-b9b5-44b4-889b-2d500a851560", 00:33:31.981 "assigned_rate_limits": { 00:33:31.981 "rw_ios_per_sec": 0, 00:33:31.981 "rw_mbytes_per_sec": 0, 00:33:31.981 "r_mbytes_per_sec": 0, 00:33:31.981 "w_mbytes_per_sec": 0 00:33:31.981 }, 00:33:31.981 "claimed": false, 00:33:31.981 "zoned": false, 00:33:31.981 "supported_io_types": { 00:33:31.981 "read": true, 00:33:31.981 "write": true, 00:33:31.981 "unmap": true, 00:33:31.981 "flush": false, 00:33:31.981 "reset": true, 00:33:31.981 "nvme_admin": false, 00:33:31.981 "nvme_io": false, 00:33:31.981 "nvme_io_md": false, 00:33:31.981 "write_zeroes": true, 00:33:31.981 "zcopy": false, 00:33:31.981 "get_zone_info": false, 00:33:31.981 "zone_management": false, 00:33:31.981 "zone_append": false, 00:33:31.981 "compare": false, 00:33:31.981 "compare_and_write": false, 00:33:31.981 "abort": false, 00:33:31.981 "seek_hole": true, 00:33:31.981 "seek_data": true, 00:33:31.981 "copy": false, 00:33:31.981 "nvme_iov_md": false 00:33:31.981 }, 00:33:31.981 "driver_specific": { 00:33:31.981 "lvol": { 00:33:31.981 "lvol_store_uuid": "74400af6-b4a1-4009-bc9c-2f6aa3fd6eb9", 00:33:31.981 "base_bdev": "aio_bdev", 00:33:31.981 "thin_provision": false, 00:33:31.981 "num_allocated_clusters": 38, 00:33:31.981 "snapshot": false, 00:33:31.981 "clone": false, 00:33:31.981 "esnap_clone": false 00:33:31.981 } 00:33:31.981 } 00:33:31.981 } 00:33:31.981 ] 00:33:31.981 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:33:31.981 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74400af6-b4a1-4009-bc9c-2f6aa3fd6eb9 00:33:31.981 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:32.242 13:37:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:32.242 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 74400af6-b4a1-4009-bc9c-2f6aa3fd6eb9 00:33:32.242 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:32.502 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:32.502 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b98c7421-b9b5-44b4-889b-2d500a851560 00:33:32.502 13:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 74400af6-b4a1-4009-bc9c-2f6aa3fd6eb9 00:33:32.761 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:32.761 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:33.021 00:33:33.021 real 0m15.610s 00:33:33.021 user 0m15.320s 00:33:33.021 sys 0m1.381s 00:33:33.021 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:33.021 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:33.021 ************************************ 00:33:33.021 END TEST lvs_grow_clean 00:33:33.021 ************************************ 00:33:33.021 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:33:33.021 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:33.021 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:33.021 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:33.021 ************************************ 00:33:33.021 START TEST lvs_grow_dirty 00:33:33.021 ************************************ 00:33:33.021 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:33:33.021 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:33.021 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:33.021 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:33.021 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:33.021 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:33.021 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:33.021 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:33.021 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:33.021 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:33.281 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:33.281 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:33.281 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b9517f87-94f4-451d-9b9d-200238d4f449 00:33:33.281 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9517f87-94f4-451d-9b9d-200238d4f449 00:33:33.281 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:33.541 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:33.541 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:33.541 13:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b9517f87-94f4-451d-9b9d-200238d4f449 lvol 150 00:33:33.801 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8ddf6acb-5387-419d-818d-54416ae69e47 00:33:33.801 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:33.801 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:33.801 [2024-12-05 13:37:56.319653] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:33.801 [2024-12-05 13:37:56.319797] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:33.801 true 00:33:33.801 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9517f87-94f4-451d-9b9d-200238d4f449 00:33:33.801 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:34.062 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:34.062 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:34.322 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8ddf6acb-5387-419d-818d-54416ae69e47 00:33:34.322 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:34.582 [2024-12-05 13:37:56.951786] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.582 13:37:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:34.582 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1182678 00:33:34.582 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:34.582 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:34.582 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1182678 /var/tmp/bdevperf.sock 00:33:34.582 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1182678 ']' 00:33:34.582 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:34.582 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:34.582 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:34.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
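The dirty-grow setup traced above is plain SPDK RPC traffic and can be replayed by hand against a running nvmf_tgt. A minimal sketch, assuming the default /var/tmp/spdk.sock and a hypothetical scratch path (the test itself uses test/nvmf/target/aio_bdev); sizes and flags are the ones from this run:

  truncate -s 200M /tmp/aio_file                      # hypothetical path standing in for the test's backing file
  scripts/rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs   # 200M at 4M clusters leaves 49 data clusters after metadata
  scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150
  truncate -s 400M /tmp/aio_file                      # grow the backing file behind SPDK's back
  scripts/rpc.py bdev_aio_rescan aio_bdev             # block count doubles to 102400; lvstore still reports 49 clusters

The lvstore itself is only grown later, mid-I/O, via bdev_lvol_grow_lvstore; that ordering is what makes the on-disk metadata dirty when the target is killed.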
00:33:34.582 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:34.582 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:34.842 [2024-12-05 13:37:57.183911] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:33:34.843 [2024-12-05 13:37:57.183966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182678 ] 00:33:34.843 [2024-12-05 13:37:57.274483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.843 [2024-12-05 13:37:57.304495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:35.413 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:35.413 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:35.413 13:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:35.672 Nvme0n1 00:33:35.933 13:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:35.933 [ 00:33:35.933 { 00:33:35.933 "name": "Nvme0n1", 00:33:35.933 "aliases": [ 00:33:35.933 "8ddf6acb-5387-419d-818d-54416ae69e47" 00:33:35.933 ], 00:33:35.933 "product_name": "NVMe disk", 00:33:35.933 "block_size": 4096, 00:33:35.933 "num_blocks": 38912, 00:33:35.933 "uuid": "8ddf6acb-5387-419d-818d-54416ae69e47", 00:33:35.933 "numa_id": 0, 00:33:35.933 "assigned_rate_limits": { 00:33:35.933 "rw_ios_per_sec": 0, 00:33:35.933 "rw_mbytes_per_sec": 0, 00:33:35.933 "r_mbytes_per_sec": 0, 00:33:35.933 "w_mbytes_per_sec": 0 00:33:35.933 }, 00:33:35.933 "claimed": false, 00:33:35.933 "zoned": false, 00:33:35.933 "supported_io_types": { 00:33:35.933 "read": true, 00:33:35.933 "write": true, 00:33:35.933 "unmap": true, 00:33:35.933 "flush": true, 00:33:35.933 "reset": true, 00:33:35.933 "nvme_admin": true, 00:33:35.933 "nvme_io": true, 00:33:35.933 "nvme_io_md": false, 00:33:35.933 "write_zeroes": true, 00:33:35.933 "zcopy": false, 00:33:35.933 "get_zone_info": false, 00:33:35.933 "zone_management": false, 00:33:35.933 "zone_append": false, 00:33:35.933 "compare": true, 00:33:35.933 "compare_and_write": true, 00:33:35.933 "abort": true, 00:33:35.933 "seek_hole": false, 00:33:35.933 "seek_data": false, 00:33:35.933 "copy": true, 00:33:35.933 "nvme_iov_md": false 00:33:35.933 }, 00:33:35.933 "memory_domains": [ 00:33:35.933 { 00:33:35.933 "dma_device_id": "system", 00:33:35.933 "dma_device_type": 1 00:33:35.933 } 00:33:35.933 ], 00:33:35.933 "driver_specific": { 00:33:35.933 "nvme": [ 00:33:35.933 { 00:33:35.933 "trid": { 00:33:35.933 "trtype": "TCP", 00:33:35.933 "adrfam": "IPv4", 00:33:35.933 "traddr": "10.0.0.2", 00:33:35.933 "trsvcid": "4420", 00:33:35.933 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:35.933 }, 00:33:35.933 "ctrlr_data": 
{ 00:33:35.933 "cntlid": 1, 00:33:35.933 "vendor_id": "0x8086", 00:33:35.933 "model_number": "SPDK bdev Controller", 00:33:35.933 "serial_number": "SPDK0", 00:33:35.933 "firmware_revision": "25.01", 00:33:35.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:35.933 "oacs": { 00:33:35.933 "security": 0, 00:33:35.933 "format": 0, 00:33:35.933 "firmware": 0, 00:33:35.933 "ns_manage": 0 00:33:35.933 }, 00:33:35.933 "multi_ctrlr": true, 00:33:35.933 "ana_reporting": false 00:33:35.933 }, 00:33:35.933 "vs": { 00:33:35.933 "nvme_version": "1.3" 00:33:35.933 }, 00:33:35.933 "ns_data": { 00:33:35.933 "id": 1, 00:33:35.933 "can_share": true 00:33:35.933 } 00:33:35.933 } 00:33:35.933 ], 00:33:35.933 "mp_policy": "active_passive" 00:33:35.933 } 00:33:35.933 } 00:33:35.933 ] 00:33:35.933 13:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1183005 00:33:35.933 13:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:35.933 13:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:36.194 Running I/O for 10 seconds... 00:33:37.135 Latency(us) 00:33:37.136 [2024-12-05T12:37:59.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:37.136 Nvme0n1 : 1.00 17917.00 69.99 0.00 0.00 0.00 0.00 0.00 00:33:37.136 [2024-12-05T12:37:59.704Z] =================================================================================================================== 00:33:37.136 [2024-12-05T12:37:59.704Z] Total : 17917.00 69.99 0.00 0.00 0.00 0.00 0.00 00:33:37.136 00:33:38.074 13:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b9517f87-94f4-451d-9b9d-200238d4f449 00:33:38.074 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:38.074 Nvme0n1 : 2.00 17975.50 70.22 0.00 0.00 0.00 0.00 0.00 00:33:38.074 [2024-12-05T12:38:00.642Z] =================================================================================================================== 00:33:38.074 [2024-12-05T12:38:00.642Z] Total : 17975.50 70.22 0.00 0.00 0.00 0.00 0.00 00:33:38.074 00:33:38.074 true 00:33:38.074 13:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9517f87-94f4-451d-9b9d-200238d4f449 00:33:38.074 13:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:38.335 13:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:38.335 13:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:38.335 13:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1183005 00:33:39.283 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:39.283 Nvme0n1 : 
3.00 17995.00 70.29 0.00 0.00 0.00 0.00 0.00 00:33:39.283 [2024-12-05T12:38:01.851Z] =================================================================================================================== 00:33:39.283 [2024-12-05T12:38:01.851Z] Total : 17995.00 70.29 0.00 0.00 0.00 0.00 0.00 00:33:39.283 00:33:40.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:40.223 Nvme0n1 : 4.00 18036.50 70.46 0.00 0.00 0.00 0.00 0.00 00:33:40.223 [2024-12-05T12:38:02.791Z] =================================================================================================================== 00:33:40.223 [2024-12-05T12:38:02.791Z] Total : 18036.50 70.46 0.00 0.00 0.00 0.00 0.00 00:33:40.223 00:33:41.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:41.165 Nvme0n1 : 5.00 18061.40 70.55 0.00 0.00 0.00 0.00 0.00 00:33:41.165 [2024-12-05T12:38:03.733Z] =================================================================================================================== 00:33:41.165 [2024-12-05T12:38:03.733Z] Total : 18061.40 70.55 0.00 0.00 0.00 0.00 0.00 00:33:41.165 00:33:42.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:42.105 Nvme0n1 : 6.00 18078.00 70.62 0.00 0.00 0.00 0.00 0.00 00:33:42.105 [2024-12-05T12:38:04.673Z] =================================================================================================================== 00:33:42.105 [2024-12-05T12:38:04.673Z] Total : 18078.00 70.62 0.00 0.00 0.00 0.00 0.00 00:33:42.105 00:33:43.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:43.046 Nvme0n1 : 7.00 18089.86 70.66 0.00 0.00 0.00 0.00 0.00 00:33:43.046 [2024-12-05T12:38:05.614Z] =================================================================================================================== 00:33:43.046 [2024-12-05T12:38:05.614Z] Total : 18089.86 70.66 0.00 0.00 0.00 0.00 0.00 00:33:43.046 00:33:43.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:43.985 Nvme0n1 : 8.00 18098.75 70.70 0.00 0.00 0.00 0.00 0.00 00:33:43.985 [2024-12-05T12:38:06.553Z] =================================================================================================================== 00:33:43.985 [2024-12-05T12:38:06.553Z] Total : 18098.75 70.70 0.00 0.00 0.00 0.00 0.00 00:33:43.985 00:33:45.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:45.370 Nvme0n1 : 9.00 18119.78 70.78 0.00 0.00 0.00 0.00 0.00 00:33:45.370 [2024-12-05T12:38:07.938Z] =================================================================================================================== 00:33:45.370 [2024-12-05T12:38:07.938Z] Total : 18119.78 70.78 0.00 0.00 0.00 0.00 0.00 00:33:45.370 00:33:46.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:46.313 Nvme0n1 : 10.00 18123.90 70.80 0.00 0.00 0.00 0.00 0.00 00:33:46.313 [2024-12-05T12:38:08.881Z] =================================================================================================================== 00:33:46.313 [2024-12-05T12:38:08.881Z] Total : 18123.90 70.80 0.00 0.00 0.00 0.00 0.00 00:33:46.313 00:33:46.313 00:33:46.313 Latency(us) 00:33:46.313 [2024-12-05T12:38:08.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.313 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:46.313 Nvme0n1 : 10.00 18128.41 70.81 0.00 0.00 7059.15 2116.27 13161.81 00:33:46.313 
[2024-12-05T12:38:08.881Z] =================================================================================================================== 00:33:46.313 [2024-12-05T12:38:08.881Z] Total : 18128.41 70.81 0.00 0.00 7059.15 2116.27 13161.81 00:33:46.313 { 00:33:46.313 "results": [ 00:33:46.313 { 00:33:46.313 "job": "Nvme0n1", 00:33:46.313 "core_mask": "0x2", 00:33:46.313 "workload": "randwrite", 00:33:46.313 "status": "finished", 00:33:46.313 "queue_depth": 128, 00:33:46.313 "io_size": 4096, 00:33:46.313 "runtime": 10.004573, 00:33:46.313 "iops": 18128.409878162714, 00:33:46.313 "mibps": 70.8141010865731, 00:33:46.313 "io_failed": 0, 00:33:46.313 "io_timeout": 0, 00:33:46.313 "avg_latency_us": 7059.145593630594, 00:33:46.313 "min_latency_us": 2116.266666666667, 00:33:46.313 "max_latency_us": 13161.813333333334 00:33:46.313 } 00:33:46.313 ], 00:33:46.313 "core_count": 1 00:33:46.313 } 00:33:46.313 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1182678 00:33:46.313 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1182678 ']' 00:33:46.313 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1182678 00:33:46.313 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:46.313 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:46.313 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1182678 00:33:46.313 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:46.313 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:46.313 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1182678' 00:33:46.313 killing process with pid 1182678 00:33:46.313 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1182678 00:33:46.313 Received shutdown signal, test time was about 10.000000 seconds 00:33:46.313 00:33:46.313 Latency(us) 00:33:46.313 [2024-12-05T12:38:08.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.313 [2024-12-05T12:38:08.881Z] =================================================================================================================== 00:33:46.313 [2024-12-05T12:38:08.881Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:46.313 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1182678 00:33:46.313 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:46.575 13:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:33:46.575 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9517f87-94f4-451d-9b9d-200238d4f449 00:33:46.575 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1179211 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1179211 00:33:46.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1179211 Killed "${NVMF_APP[@]}" "$@" 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1185018 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1185018 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1185018 ']' 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:46.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
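The kill -9 a few lines up is the whole point of the dirty variant: the old target dies without a clean lvstore shutdown, so the superblock is never marked clean and the reload below has to recover. Stripped of the harness, the restart step amounts to (a sketch; pid variable, netns name, and flags taken from this run):

  kill -9 "$nvmfpid"                                   # simulate power loss: no clean lvstore close
  ip netns exec cvl_0_0_ns_spdk \
      build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &
  waitforlisten "$!" /var/tmp/spdk.sock                # harness helper; polls until the RPC socket answers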
00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:46.836 13:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:46.836 [2024-12-05 13:38:09.330024] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:46.836 [2024-12-05 13:38:09.331046] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:33:46.836 [2024-12-05 13:38:09.331092] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.096 [2024-12-05 13:38:09.424431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.096 [2024-12-05 13:38:09.461027] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.096 [2024-12-05 13:38:09.461063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.096 [2024-12-05 13:38:09.461073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:47.096 [2024-12-05 13:38:09.461081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:47.096 [2024-12-05 13:38:09.461088] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:47.096 [2024-12-05 13:38:09.461635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:47.096 [2024-12-05 13:38:09.517110] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:47.096 [2024-12-05 13:38:09.517346] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
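With the target back up in interrupt mode, the next bdev_aio_create re-exposes the backing file; lvol's examine path finds the dirty lvstore and blobstore recovery replays its metadata (the bs_recover / "Recover: blob" notices that follow). The post-recovery checks are the same get_lvstores probes used throughout, e.g.:

  scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].free_clusters'        # expect 61 here
  scripts/rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> | jq -r '.[0].total_data_clusters'  # expect 99: the grow survived the crash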
00:33:47.667 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:47.667 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:47.667 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:47.667 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:47.667 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:47.667 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.667 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:47.928 [2024-12-05 13:38:10.333293] blobstore.c:4896:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:47.928 [2024-12-05 13:38:10.333431] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:47.928 [2024-12-05 13:38:10.333464] blobstore.c:4843:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:47.928 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:47.928 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8ddf6acb-5387-419d-818d-54416ae69e47 00:33:47.928 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8ddf6acb-5387-419d-818d-54416ae69e47 00:33:47.928 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:47.928 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:47.928 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:47.928 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:47.928 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:48.211 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8ddf6acb-5387-419d-818d-54416ae69e47 -t 2000 00:33:48.211 [ 00:33:48.211 { 00:33:48.211 "name": "8ddf6acb-5387-419d-818d-54416ae69e47", 00:33:48.211 "aliases": [ 00:33:48.211 "lvs/lvol" 00:33:48.211 ], 00:33:48.211 "product_name": "Logical Volume", 00:33:48.211 "block_size": 4096, 00:33:48.211 "num_blocks": 38912, 00:33:48.211 "uuid": "8ddf6acb-5387-419d-818d-54416ae69e47", 00:33:48.211 "assigned_rate_limits": { 00:33:48.211 "rw_ios_per_sec": 0, 00:33:48.211 "rw_mbytes_per_sec": 0, 00:33:48.211 
"r_mbytes_per_sec": 0, 00:33:48.211 "w_mbytes_per_sec": 0 00:33:48.211 }, 00:33:48.211 "claimed": false, 00:33:48.211 "zoned": false, 00:33:48.211 "supported_io_types": { 00:33:48.211 "read": true, 00:33:48.211 "write": true, 00:33:48.211 "unmap": true, 00:33:48.211 "flush": false, 00:33:48.211 "reset": true, 00:33:48.211 "nvme_admin": false, 00:33:48.211 "nvme_io": false, 00:33:48.211 "nvme_io_md": false, 00:33:48.211 "write_zeroes": true, 00:33:48.211 "zcopy": false, 00:33:48.211 "get_zone_info": false, 00:33:48.211 "zone_management": false, 00:33:48.211 "zone_append": false, 00:33:48.211 "compare": false, 00:33:48.211 "compare_and_write": false, 00:33:48.211 "abort": false, 00:33:48.211 "seek_hole": true, 00:33:48.211 "seek_data": true, 00:33:48.211 "copy": false, 00:33:48.211 "nvme_iov_md": false 00:33:48.211 }, 00:33:48.211 "driver_specific": { 00:33:48.211 "lvol": { 00:33:48.211 "lvol_store_uuid": "b9517f87-94f4-451d-9b9d-200238d4f449", 00:33:48.211 "base_bdev": "aio_bdev", 00:33:48.211 "thin_provision": false, 00:33:48.211 "num_allocated_clusters": 38, 00:33:48.211 "snapshot": false, 00:33:48.211 "clone": false, 00:33:48.211 "esnap_clone": false 00:33:48.211 } 00:33:48.211 } 00:33:48.211 } 00:33:48.211 ] 00:33:48.211 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:48.211 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9517f87-94f4-451d-9b9d-200238d4f449 00:33:48.211 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:48.471 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:48.471 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9517f87-94f4-451d-9b9d-200238d4f449 00:33:48.471 13:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:48.731 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:48.731 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:48.731 [2024-12-05 13:38:11.238059] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:48.731 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9517f87-94f4-451d-9b9d-200238d4f449 00:33:48.731 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:48.731 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9517f87-94f4-451d-9b9d-200238d4f449 00:33:48.731 13:38:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:48.731 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:48.731 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:48.731 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:48.731 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:48.731 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:48.731 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:48.731 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:48.731 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9517f87-94f4-451d-9b9d-200238d4f449 00:33:48.990 request: 00:33:48.990 { 00:33:48.990 "uuid": "b9517f87-94f4-451d-9b9d-200238d4f449", 00:33:48.990 "method": "bdev_lvol_get_lvstores", 00:33:48.990 "req_id": 1 00:33:48.990 } 00:33:48.990 Got JSON-RPC error response 00:33:48.990 response: 00:33:48.990 { 00:33:48.990 "code": -19, 00:33:48.991 "message": "No such device" 00:33:48.991 } 00:33:48.991 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:48.991 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:48.991 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:48.991 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:48.991 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:49.250 aio_bdev 00:33:49.250 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8ddf6acb-5387-419d-818d-54416ae69e47 00:33:49.250 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8ddf6acb-5387-419d-818d-54416ae69e47 00:33:49.250 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:49.250 13:38:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:49.250 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:49.250 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:49.250 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:49.250 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8ddf6acb-5387-419d-818d-54416ae69e47 -t 2000 00:33:49.510 [ 00:33:49.510 { 00:33:49.510 "name": "8ddf6acb-5387-419d-818d-54416ae69e47", 00:33:49.510 "aliases": [ 00:33:49.510 "lvs/lvol" 00:33:49.510 ], 00:33:49.510 "product_name": "Logical Volume", 00:33:49.510 "block_size": 4096, 00:33:49.510 "num_blocks": 38912, 00:33:49.510 "uuid": "8ddf6acb-5387-419d-818d-54416ae69e47", 00:33:49.510 "assigned_rate_limits": { 00:33:49.510 "rw_ios_per_sec": 0, 00:33:49.510 "rw_mbytes_per_sec": 0, 00:33:49.510 "r_mbytes_per_sec": 0, 00:33:49.510 "w_mbytes_per_sec": 0 00:33:49.510 }, 00:33:49.510 "claimed": false, 00:33:49.510 "zoned": false, 00:33:49.510 "supported_io_types": { 00:33:49.510 "read": true, 00:33:49.510 "write": true, 00:33:49.510 "unmap": true, 00:33:49.510 "flush": false, 00:33:49.510 "reset": true, 00:33:49.510 "nvme_admin": false, 00:33:49.510 "nvme_io": false, 00:33:49.510 "nvme_io_md": false, 00:33:49.510 "write_zeroes": true, 00:33:49.510 "zcopy": false, 00:33:49.510 "get_zone_info": false, 00:33:49.510 "zone_management": false, 00:33:49.510 "zone_append": false, 00:33:49.510 "compare": false, 00:33:49.510 "compare_and_write": false, 00:33:49.510 "abort": false, 00:33:49.510 "seek_hole": true, 00:33:49.510 "seek_data": true, 00:33:49.510 "copy": false, 00:33:49.510 "nvme_iov_md": false 00:33:49.510 }, 00:33:49.510 "driver_specific": { 00:33:49.510 "lvol": { 00:33:49.510 "lvol_store_uuid": "b9517f87-94f4-451d-9b9d-200238d4f449", 00:33:49.510 "base_bdev": "aio_bdev", 00:33:49.510 "thin_provision": false, 00:33:49.510 "num_allocated_clusters": 38, 00:33:49.510 "snapshot": false, 00:33:49.510 "clone": false, 00:33:49.510 "esnap_clone": false 00:33:49.510 } 00:33:49.510 } 00:33:49.510 } 00:33:49.510 ] 00:33:49.510 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:49.510 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9517f87-94f4-451d-9b9d-200238d4f449 00:33:49.510 13:38:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:49.770 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:49.770 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b9517f87-94f4-451d-9b9d-200238d4f449 00:33:49.770 13:38:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:49.770 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:49.770 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8ddf6acb-5387-419d-818d-54416ae69e47 00:33:50.029 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b9517f87-94f4-451d-9b9d-200238d4f449 00:33:50.296 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:50.296 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:50.555 00:33:50.555 real 0m17.457s 00:33:50.555 user 0m35.351s 00:33:50.555 sys 0m3.021s 00:33:50.555 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:50.555 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:50.555 ************************************ 00:33:50.555 END TEST lvs_grow_dirty 00:33:50.555 ************************************ 00:33:50.555 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:50.555 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:33:50.555 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:33:50.555 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:33:50.555 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:50.555 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:33:50.555 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:33:50.555 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:33:50.555 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:50.555 nvmf_trace.0 00:33:50.555 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:33:50.555 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:50.556 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:50.556 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
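Teardown from here is the standard nvmftestfini path: archive the trace shm file, sync, unload the kernel NVMe/TCP stack, kill the target, and restore iptables. The cleanup that follows reduces to (sketch; module and interface names as the messages below report them):

  modprobe -v -r nvme-tcp                               # also drops nvme_fabrics and nvme_keyring as dependents
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # iptr: strip the test's firewall rules, keep the rest
  ip -4 addr flush cvl_0_1                              # release the test interface's addresses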
00:33:50.556 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:50.556 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:50.556 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:50.556 13:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:50.556 rmmod nvme_tcp 00:33:50.556 rmmod nvme_fabrics 00:33:50.556 rmmod nvme_keyring 00:33:50.556 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:50.556 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:50.556 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:50.556 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1185018 ']' 00:33:50.556 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1185018 00:33:50.556 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1185018 ']' 00:33:50.556 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1185018 00:33:50.556 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:33:50.556 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:50.556 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1185018 00:33:50.556 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:50.556 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:50.556 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1185018' 00:33:50.556 killing process with pid 1185018 00:33:50.556 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1185018 00:33:50.556 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1185018 00:33:50.816 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:50.816 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:50.816 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:50.816 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:50.816 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:50.816 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:50.816 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:50.816 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:50.816 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:50.816 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:50.816 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:50.816 13:38:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:53.365 00:33:53.365 real 0m45.183s 00:33:53.365 user 0m53.675s 00:33:53.365 sys 0m11.211s 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:53.365 ************************************ 00:33:53.365 END TEST nvmf_lvs_grow 00:33:53.365 ************************************ 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:53.365 ************************************ 00:33:53.365 START TEST nvmf_bdev_io_wait 00:33:53.365 ************************************ 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:53.365 * Looking for test storage... 
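Before nvmf_bdev_io_wait proper starts, scripts/common.sh probes the installed lcov and version-compares it (the lt 1.15 2 trace below) to decide which coverage flags to export. The comparison is the usual dotted-version walk; a behaviour-equivalent one-liner, not the harness code:

  lt() { [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] && [ "$1" != "$2" ]; }
  lt 1.15 2 && export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'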
00:33:53.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:53.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:53.365 --rc genhtml_branch_coverage=1 00:33:53.365 --rc genhtml_function_coverage=1 00:33:53.365 --rc genhtml_legend=1 00:33:53.365 --rc geninfo_all_blocks=1 00:33:53.365 --rc geninfo_unexecuted_blocks=1 00:33:53.365 00:33:53.365 ' 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:53.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:53.365 --rc genhtml_branch_coverage=1 00:33:53.365 --rc genhtml_function_coverage=1 00:33:53.365 --rc genhtml_legend=1 00:33:53.365 --rc geninfo_all_blocks=1 00:33:53.365 --rc geninfo_unexecuted_blocks=1 00:33:53.365 00:33:53.365 ' 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:53.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:53.365 --rc genhtml_branch_coverage=1 00:33:53.365 --rc genhtml_function_coverage=1 00:33:53.365 --rc genhtml_legend=1 00:33:53.365 --rc geninfo_all_blocks=1 00:33:53.365 --rc geninfo_unexecuted_blocks=1 00:33:53.365 00:33:53.365 ' 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:53.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:53.365 --rc genhtml_branch_coverage=1 00:33:53.365 --rc genhtml_function_coverage=1 00:33:53.365 --rc genhtml_legend=1 00:33:53.365 --rc geninfo_all_blocks=1 00:33:53.365 --rc 
geninfo_unexecuted_blocks=1 00:33:53.365 00:33:53.365 ' 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:53.365 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:53.366 13:38:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:01.502 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:01.502 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:34:01.502 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:01.502 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:01.502 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:01.502 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
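Stripped of timestamps, the build_nvmf_app_args records above assemble the target's command line into the NVMF_APP array. A minimal sketch of that assembly, assuming NVMF_APP_SHM_ID and NO_HUGE are set by the surrounding harness; TEST_INTERRUPT_MODE is a hypothetical name, since the trace only shows the already-expanded test '[ 1 -eq 1 ]':

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NVMF_APP=("$rootdir/build/bin/nvmf_tgt")
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # common.sh@29: shm id + full tracepoint mask
NVMF_APP+=("${NO_HUGE[@]}")                   # common.sh@31: empty unless a no-huge run was requested
if ((TEST_INTERRUPT_MODE == 1)); then         # common.sh@33; flag name assumed, value 1 on this run
    NVMF_APP+=(--interrupt-mode)
fi

This matches the launch seen later in the trace: nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode plus per-test arguments.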
00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
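The device-ID bookkeeping being traced here boils down to a few arrays of vendor:device IDs. A condensed sketch; pci_bus_cache is an associative array the harness populates elsewhere and is assumed here, and the per-ID comments are best-effort readings of the E810 family:

intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()
e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810 100G variant
e810+=(${pci_bus_cache["$intel:0x159b"]})    # E810 25G variant; both ports found below report this ID
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x101d"]})  # one of the several ConnectX IDs registered in the trace
pci_devs=("${e810[@]}")                      # a tcp (non-rdma) run on an e810 testbed keeps only e810 ports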
00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:01.503 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:01.503 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:01.503 Found net devices under 0000:31:00.0: cvl_0_0 00:34:01.503 
13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:01.503 Found net devices under 0000:31:00.1: cvl_0_1 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:01.503 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:01.504 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:01.504 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:01.504 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:01.504 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:01.504 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:01.504 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:01.504 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:01.504 13:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:01.504 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:01.504 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:01.504 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:01.764 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:01.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:01.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.694 ms 00:34:01.764 00:34:01.764 --- 10.0.0.2 ping statistics --- 00:34:01.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.764 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:34:01.764 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:01.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:01.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:34:01.764 00:34:01.764 --- 10.0.0.1 ping statistics --- 00:34:01.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.764 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:34:01.764 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:01.764 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:34:01.764 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:01.764 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:01.764 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:01.764 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:01.764 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:01.764 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:01.764 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:01.764 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:34:01.764 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:01.764 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:01.764 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:01.764 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1190455 00:34:01.765 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:34:01.765 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1190455 00:34:01.765 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1190455 ']' 00:34:01.765 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.765 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:01.765 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
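Stripped of timestamps, the network bring-up traced above is a small self-contained recipe: one E810 port stays in the default namespace as the initiator, the other is moved into a private namespace for the target, and a firewall exception is opened for the NVMe/TCP port. Consolidated (commands verbatim from the trace; the iptables comment string is abbreviated here, the harness records the full rule text in it so the cleanup pass can grep it back out):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port -> private netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:...'
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Both pings succeeding, as above, is the precondition for prefixing NVMF_APP with 'ip netns exec cvl_0_0_ns_spdk' and starting the target.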
00:34:01.765 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:01.765 13:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:01.765 [2024-12-05 13:38:24.200407] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:01.765 [2024-12-05 13:38:24.201545] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:34:01.765 [2024-12-05 13:38:24.201599] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.765 [2024-12-05 13:38:24.293703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:02.026 [2024-12-05 13:38:24.336496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:02.026 [2024-12-05 13:38:24.336534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:02.026 [2024-12-05 13:38:24.336543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:02.026 [2024-12-05 13:38:24.336549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:02.026 [2024-12-05 13:38:24.336555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:02.026 [2024-12-05 13:38:24.338422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:02.026 [2024-12-05 13:38:24.338543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:02.026 [2024-12-05 13:38:24.338699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.026 [2024-12-05 13:38:24.338699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:02.026 [2024-12-05 13:38:24.338980] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
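waitforlisten itself is a harness helper; a minimal stand-in for what the trace is waiting on (a live pid and an answering RPC socket) could look like the loop below. The retry count mirrors max_retries=100 and rpc_addr=/var/tmp/spdk.sock from the trace; the polling details are an assumption, not the helper's actual implementation:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_addr=/var/tmp/spdk.sock
pid=1190455
for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || { echo "target died before listening" >&2; exit 1; }
    # consider the target up once the RPC socket answers a trivial method
    if "$rootdir/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null; then
        break
    fi
    sleep 0.1
done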
00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:02.598 [2024-12-05 13:38:25.126664] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:02.598 [2024-12-05 13:38:25.126792] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:02.598 [2024-12-05 13:38:25.127433] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:02.598 [2024-12-05 13:38:25.127605] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
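The rpc_cmd calls in this block and the ones just below configure the freshly started target. Note bdev_set_options -p 5 -c 1: shrinking the global bdev_io pool to 5 entries with a per-thread cache of 1 is what makes this an io-wait test, since bdevperf will exhaust the pool and the bdev layer must queue I/O until buffers are returned. Issued directly against the default RPC socket, the sequence traced here and in the following records is roughly (rpc_cmd is the harness wrapper around scripts/rpc.py; all method names and arguments are verbatim from the trace):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_set_options -p 5 -c 1            # tiny bdev_io pool: force the queue-io-wait path
$rpc framework_start_init                   # leave the --wait-for-rpc pre-init state
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512-byte blocks, per the sizes set earlier
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420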
00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:02.598 [2024-12-05 13:38:25.139490] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.598 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:02.870 Malloc0 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:02.870 [2024-12-05 13:38:25.203419] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1190788 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1190790 00:34:02.870 13:38:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:02.870 { 00:34:02.870 "params": { 00:34:02.870 "name": "Nvme$subsystem", 00:34:02.870 "trtype": "$TEST_TRANSPORT", 00:34:02.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.870 "adrfam": "ipv4", 00:34:02.870 "trsvcid": "$NVMF_PORT", 00:34:02.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.870 "hdgst": ${hdgst:-false}, 00:34:02.870 "ddgst": ${ddgst:-false} 00:34:02.870 }, 00:34:02.870 "method": "bdev_nvme_attach_controller" 00:34:02.870 } 00:34:02.870 EOF 00:34:02.870 )") 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1190792 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:02.870 { 00:34:02.870 "params": { 00:34:02.870 "name": "Nvme$subsystem", 00:34:02.870 "trtype": "$TEST_TRANSPORT", 00:34:02.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.870 "adrfam": "ipv4", 00:34:02.870 "trsvcid": "$NVMF_PORT", 00:34:02.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.870 "hdgst": ${hdgst:-false}, 00:34:02.870 "ddgst": ${ddgst:-false} 00:34:02.870 }, 00:34:02.870 "method": "bdev_nvme_attach_controller" 00:34:02.870 } 00:34:02.870 EOF 00:34:02.870 )") 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1190795 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:02.870 { 00:34:02.870 "params": { 00:34:02.870 "name": "Nvme$subsystem", 00:34:02.870 "trtype": "$TEST_TRANSPORT", 00:34:02.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.870 "adrfam": "ipv4", 00:34:02.870 "trsvcid": "$NVMF_PORT", 00:34:02.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.870 "hdgst": ${hdgst:-false}, 00:34:02.870 "ddgst": ${ddgst:-false} 00:34:02.870 }, 00:34:02.870 "method": "bdev_nvme_attach_controller" 00:34:02.870 } 00:34:02.870 EOF 00:34:02.870 )") 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:02.870 { 00:34:02.870 "params": { 00:34:02.870 "name": "Nvme$subsystem", 00:34:02.870 "trtype": "$TEST_TRANSPORT", 00:34:02.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.870 "adrfam": "ipv4", 00:34:02.870 "trsvcid": "$NVMF_PORT", 00:34:02.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.870 "hdgst": ${hdgst:-false}, 00:34:02.870 "ddgst": ${ddgst:-false} 00:34:02.870 }, 00:34:02.870 "method": "bdev_nvme_attach_controller" 00:34:02.870 } 00:34:02.870 EOF 00:34:02.870 )") 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1190788 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
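Each bdevperf instance receives its controller definition on stdin rather than from a file: gen_nvmf_target_json expands one here-doc per subsystem into a config fragment, then the fragments are joined with IFS=, and piped through jq, which is exactly what the '--json /dev/fd/63' arguments above are consuming via process substitution. A condensed sketch, assuming TEST_TRANSPORT, NVMF_FIRST_TARGET_IP and NVMF_PORT are exported by the harness; the outer "subsystems"/"bdev" wrapper is an assumption, since the trace shows only the fragments, the IFS=, join and the final 'jq .':

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    jq . <<<"{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
}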
00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:02.870 "params": { 00:34:02.870 "name": "Nvme1", 00:34:02.870 "trtype": "tcp", 00:34:02.870 "traddr": "10.0.0.2", 00:34:02.870 "adrfam": "ipv4", 00:34:02.870 "trsvcid": "4420", 00:34:02.870 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:02.870 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:02.870 "hdgst": false, 00:34:02.870 "ddgst": false 00:34:02.870 }, 00:34:02.870 "method": "bdev_nvme_attach_controller" 00:34:02.870 }' 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:02.870 "params": { 00:34:02.870 "name": "Nvme1", 00:34:02.870 "trtype": "tcp", 00:34:02.870 "traddr": "10.0.0.2", 00:34:02.870 "adrfam": "ipv4", 00:34:02.870 "trsvcid": "4420", 00:34:02.870 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:02.870 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:02.870 "hdgst": false, 00:34:02.870 "ddgst": false 00:34:02.870 }, 00:34:02.870 "method": "bdev_nvme_attach_controller" 00:34:02.870 }' 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:02.870 "params": { 00:34:02.870 "name": "Nvme1", 00:34:02.870 "trtype": "tcp", 00:34:02.870 "traddr": "10.0.0.2", 00:34:02.870 "adrfam": "ipv4", 00:34:02.870 "trsvcid": "4420", 00:34:02.870 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:02.870 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:02.870 "hdgst": false, 00:34:02.870 "ddgst": false 00:34:02.870 }, 00:34:02.870 "method": "bdev_nvme_attach_controller" 00:34:02.870 }' 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:34:02.870 13:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:02.870 "params": { 00:34:02.870 "name": "Nvme1", 00:34:02.870 "trtype": "tcp", 00:34:02.870 "traddr": "10.0.0.2", 00:34:02.870 "adrfam": "ipv4", 00:34:02.870 "trsvcid": "4420", 00:34:02.870 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:02.870 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:02.870 "hdgst": false, 00:34:02.870 "ddgst": false 00:34:02.870 }, 00:34:02.871 "method": "bdev_nvme_attach_controller" 00:34:02.871 }' 00:34:02.871 [2024-12-05 13:38:25.259642] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:34:02.871 [2024-12-05 13:38:25.259699] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:34:02.871 [2024-12-05 13:38:25.259937] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:34:02.871 [2024-12-05 13:38:25.259986] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:34:02.871 [2024-12-05 13:38:25.260924] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:34:02.871 [2024-12-05 13:38:25.260971] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:34:02.871 [2024-12-05 13:38:25.264363] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:34:02.871 [2024-12-05 13:38:25.264415] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:34:02.871 [2024-12-05 13:38:25.428220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.208 [2024-12-05 13:38:25.457105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:03.208 [2024-12-05 13:38:25.485680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.208 [2024-12-05 13:38:25.513655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:03.208 [2024-12-05 13:38:25.568699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.208 [2024-12-05 13:38:25.597980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:03.208 [2024-12-05 13:38:25.602038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.208 [2024-12-05 13:38:25.629975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:03.208 Running I/O for 1 seconds... 00:34:03.208 Running I/O for 1 seconds... 00:34:03.495 Running I/O for 1 seconds... 00:34:03.495 Running I/O for 1 seconds... 
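Four bdevperf instances run concurrently against the same namespace, one workload each: write, read, flush and unmap on core masks 0x10, 0x20, 0x40 and 0x80, every one fed its controller JSON through process substitution; the 'Running I/O for 1 seconds...' lines above are their startup banners. One launch spelled out, with the binary path and options verbatim from the trace:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# <(gen_nvmf_target_json) is what appears as --json /dev/fd/63 in the trace
"$rootdir/build/examples/bdevperf" -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
# the other three differ only in -m/-i/-w: 0x20/2/read, 0x40/3/flush, 0x80/4/unmap
wait "$WRITE_PID"

The per-workload wait calls (wait 1190788, 1190790, 1190792, 1190795 in the trace) collect each instance before its one-second latency summary is printed.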
00:34:04.364 12727.00 IOPS, 49.71 MiB/s 00:34:04.364 Latency(us) 00:34:04.364 [2024-12-05T12:38:26.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.364 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:34:04.364 Nvme1n1 : 1.01 12772.13 49.89 0.00 0.00 9987.57 4096.00 12124.16 00:34:04.364 [2024-12-05T12:38:26.932Z] =================================================================================================================== 00:34:04.364 [2024-12-05T12:38:26.932Z] Total : 12772.13 49.89 0.00 0.00 9987.57 4096.00 12124.16 00:34:04.364 12354.00 IOPS, 48.26 MiB/s 00:34:04.364 Latency(us) 00:34:04.364 [2024-12-05T12:38:26.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.364 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:34:04.364 Nvme1n1 : 1.01 12420.81 48.52 0.00 0.00 10269.85 2225.49 14745.60 00:34:04.364 [2024-12-05T12:38:26.932Z] =================================================================================================================== 00:34:04.364 [2024-12-05T12:38:26.932Z] Total : 12420.81 48.52 0.00 0.00 10269.85 2225.49 14745.60 00:34:04.364 13:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1190790 00:34:04.364 13:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1190792 00:34:04.364 20553.00 IOPS, 80.29 MiB/s 00:34:04.364 Latency(us) 00:34:04.364 [2024-12-05T12:38:26.932Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.364 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:34:04.364 Nvme1n1 : 1.01 20635.11 80.61 0.00 0.00 6190.58 2143.57 10977.28 00:34:04.364 [2024-12-05T12:38:26.932Z] =================================================================================================================== 00:34:04.364 [2024-12-05T12:38:26.933Z] Total : 20635.11 80.61 0.00 0.00 6190.58 2143.57 10977.28 00:34:04.365 177376.00 IOPS, 692.88 MiB/s 00:34:04.365 Latency(us) 00:34:04.365 [2024-12-05T12:38:26.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.365 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:34:04.365 Nvme1n1 : 1.00 177029.23 691.52 0.00 0.00 719.08 293.55 1966.08 00:34:04.365 [2024-12-05T12:38:26.933Z] =================================================================================================================== 00:34:04.365 [2024-12-05T12:38:26.933Z] Total : 177029.23 691.52 0.00 0.00 719.08 293.55 1966.08 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1190795 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:04.625 rmmod nvme_tcp 00:34:04.625 rmmod nvme_fabrics 00:34:04.625 rmmod nvme_keyring 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1190455 ']' 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1190455 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1190455 ']' 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1190455 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1190455 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1190455' 00:34:04.625 killing process with pid 1190455 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1190455 00:34:04.625 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1190455 00:34:04.885 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:04.885 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:04.885 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:04.885 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:34:04.885 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
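Teardown mirrors setup: the kernel initiator modules are unloaded (the bare 'rmmod nvme_tcp / nvme_fabrics / nvme_keyring' lines above are modprobe -r output), the target process is killed by pid, and the firewall rules tagged SPDK_NVMF during setup are filtered back out by rewriting the whole ruleset. Consolidated, with the pid taken from the trace; that _remove_spdk_ns deletes the harness namespace is an assumption, the trace only shows it being eval'd:

modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill 1190455 && wait 1190455           # nvmfpid recorded by nvmfappstart
# drop every rule the harness tagged during setup, keep everything else
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk        # assumed: returns cvl_0_0 to the default namespace
ip -4 addr flush cvl_0_1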
00:34:04.885 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:04.885 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:34:04.885 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:04.885 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:04.885 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.885 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:04.885 13:38:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.794 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:06.794 00:34:06.794 real 0m13.933s 00:34:06.794 user 0m15.555s 00:34:06.794 sys 0m8.171s 00:34:06.794 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:06.794 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:34:06.794 ************************************ 00:34:06.794 END TEST nvmf_bdev_io_wait 00:34:06.794 ************************************ 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:07.054 ************************************ 00:34:07.054 START TEST nvmf_queue_depth 00:34:07.054 ************************************ 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:34:07.054 * Looking for test storage... 
00:34:07.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:34:07.054 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:07.055 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:34:07.316 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:07.316 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:07.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.316 --rc genhtml_branch_coverage=1 00:34:07.316 --rc genhtml_function_coverage=1 00:34:07.316 --rc genhtml_legend=1 00:34:07.316 --rc geninfo_all_blocks=1 00:34:07.316 --rc geninfo_unexecuted_blocks=1 00:34:07.316 00:34:07.316 ' 00:34:07.316 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:07.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.316 --rc genhtml_branch_coverage=1 00:34:07.316 --rc genhtml_function_coverage=1 00:34:07.316 --rc genhtml_legend=1 00:34:07.316 --rc geninfo_all_blocks=1 00:34:07.316 --rc geninfo_unexecuted_blocks=1 00:34:07.316 00:34:07.316 ' 00:34:07.316 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:07.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.316 --rc genhtml_branch_coverage=1 00:34:07.316 --rc genhtml_function_coverage=1 00:34:07.316 --rc genhtml_legend=1 00:34:07.316 --rc geninfo_all_blocks=1 00:34:07.316 --rc geninfo_unexecuted_blocks=1 00:34:07.316 00:34:07.316 ' 00:34:07.316 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:07.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.316 --rc genhtml_branch_coverage=1 00:34:07.316 --rc genhtml_function_coverage=1 00:34:07.316 --rc genhtml_legend=1 00:34:07.316 --rc geninfo_all_blocks=1 00:34:07.316 --rc 
geninfo_unexecuted_blocks=1 00:34:07.316 00:34:07.316 ' 00:34:07.316 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:07.316 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:34:07.316 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:34:07.317 13:38:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:15.475 13:38:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:15.475 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:15.475 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:34:15.475 Found net devices under 0000:31:00.0: cvl_0_0 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.475 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:15.475 Found net devices under 0000:31:00.1: cvl_0_1 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:15.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:15.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:34:15.476 00:34:15.476 --- 10.0.0.2 ping statistics --- 00:34:15.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.476 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:15.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:15.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:34:15.476 00:34:15.476 --- 10.0.0.1 ping statistics --- 00:34:15.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.476 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:15.476 13:38:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:15.476 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:34:15.476 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:15.476 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:15.476 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:15.736 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1195836 00:34:15.736 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1195836 00:34:15.736 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:15.736 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1195836 ']' 00:34:15.736 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:15.736 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:15.736 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:15.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
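Everything from 'ip netns add' through the two pings is the single-host NVMe/TCP fixture: nvmf_tcp_init moves one port of the e810 pair into a private network namespace, addresses both ends on 10.0.0.0/24, opens TCP port 4420 through iptables, and verifies reachability in both directions before the target is started inside the namespace. Condensed from the trace (interface and namespace names exactly as logged; must run as root):

# Condensed from the nvmf_tcp_init records above.
ip netns add cvl_0_0_ns_spdk                       # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the host namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator port stays in the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                 # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> host

The sub-millisecond round-trip times in the ping output are the fixture's sanity check that the path is up before any NVMe traffic is attempted.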
00:34:15.736 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:15.736 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:15.736 [2024-12-05 13:38:38.095520] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:15.736 [2024-12-05 13:38:38.096550] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:34:15.736 [2024-12-05 13:38:38.096591] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:15.736 [2024-12-05 13:38:38.204373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:15.736 [2024-12-05 13:38:38.252368] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:15.736 [2024-12-05 13:38:38.252411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:15.736 [2024-12-05 13:38:38.252419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:15.736 [2024-12-05 13:38:38.252426] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:15.736 [2024-12-05 13:38:38.252433] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:15.736 [2024-12-05 13:38:38.253111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:15.996 [2024-12-05 13:38:38.322730] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:15.996 [2024-12-05 13:38:38.322994] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:16.570 [2024-12-05 13:38:38.941959] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:16.570 Malloc0 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.570 13:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:16.570 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.570 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:16.570 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
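The rpc_cmd calls above perform all of the provisioning this test needs: one TCP transport, a 64 MiB malloc bdev with 512-byte blocks, and a subsystem exporting it on 10.0.0.2:4420. rpc_cmd is a thin wrapper over scripts/rpc.py against /var/tmp/spdk.sock, so the same sequence can be written out directly; a sketch with the flags copied verbatim from the trace (the rpc.py path is a placeholder):

RPC="/path/to/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192     # NVMF_TRANSPORT_OPTS '-t tcp -o' plus '-u 8192', as logged
$RPC bdev_malloc_create 64 512 -b Malloc0        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The '*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***' notice just below is the target acknowledging the last of these calls.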
00:34:16.570 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:16.570 [2024-12-05 13:38:39.018066] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:16.570 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.570 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1196042 00:34:16.570 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:16.570 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:34:16.570 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1196042 /var/tmp/bdevperf.sock 00:34:16.570 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1196042 ']' 00:34:16.570 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:34:16.570 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:16.570 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:34:16.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:34:16.570 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:16.570 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:16.570 [2024-12-05 13:38:39.084533] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
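On the initiator side the script starts bdevperf suspended (-z) on its own RPC socket, attaches a controller to the subsystem over TCP, and only then kicks off the 10-second verify run from bdevperf.py, as the following records show. A sketch of that drive-over-RPC flow (paths are placeholders; every flag is taken from the trace):

# Launch bdevperf idle until perform_tests arrives: qd 1024, 4 KiB verify I/O, 10 s.
/path/to/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
# Attach to the subsystem exported above; this creates bdev NVMe0n1.
/path/to/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1
# Start the workload; the JSON summary printed below is this call's output.
/path/to/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests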
00:34:16.570 [2024-12-05 13:38:39.084596] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196042 ] 00:34:16.832 [2024-12-05 13:38:39.167175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:16.832 [2024-12-05 13:38:39.209162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:17.402 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:17.402 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:34:17.402 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:34:17.402 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.402 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:17.402 NVMe0n1 00:34:17.402 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.402 13:38:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:34:17.664 Running I/O for 10 seconds... 00:34:19.548 8913.00 IOPS, 34.82 MiB/s [2024-12-05T12:38:43.059Z] 9208.50 IOPS, 35.97 MiB/s [2024-12-05T12:38:44.445Z] 9199.00 IOPS, 35.93 MiB/s [2024-12-05T12:38:45.389Z] 9463.50 IOPS, 36.97 MiB/s [2024-12-05T12:38:46.330Z] 10034.00 IOPS, 39.20 MiB/s [2024-12-05T12:38:47.272Z] 10403.17 IOPS, 40.64 MiB/s [2024-12-05T12:38:48.236Z] 10668.29 IOPS, 41.67 MiB/s [2024-12-05T12:38:49.176Z] 10884.75 IOPS, 42.52 MiB/s [2024-12-05T12:38:50.122Z] 11045.44 IOPS, 43.15 MiB/s [2024-12-05T12:38:50.382Z] 11166.40 IOPS, 43.62 MiB/s 00:34:27.814 Latency(us) 00:34:27.814 [2024-12-05T12:38:50.382Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:27.814 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:34:27.814 Verification LBA range: start 0x0 length 0x4000 00:34:27.814 NVMe0n1 : 10.06 11196.12 43.73 0.00 0.00 91141.00 24576.00 71215.79 00:34:27.814 [2024-12-05T12:38:50.382Z] =================================================================================================================== 00:34:27.814 [2024-12-05T12:38:50.382Z] Total : 11196.12 43.73 0.00 0.00 91141.00 24576.00 71215.79 00:34:27.814 { 00:34:27.814 "results": [ 00:34:27.814 { 00:34:27.814 "job": "NVMe0n1", 00:34:27.814 "core_mask": "0x1", 00:34:27.814 "workload": "verify", 00:34:27.814 "status": "finished", 00:34:27.814 "verify_range": { 00:34:27.814 "start": 0, 00:34:27.814 "length": 16384 00:34:27.814 }, 00:34:27.814 "queue_depth": 1024, 00:34:27.814 "io_size": 4096, 00:34:27.814 "runtime": 10.060631, 00:34:27.814 "iops": 11196.116824083896, 00:34:27.814 "mibps": 43.73483134407772, 00:34:27.814 "io_failed": 0, 00:34:27.814 "io_timeout": 0, 00:34:27.814 "avg_latency_us": 91140.99587878787, 00:34:27.814 "min_latency_us": 24576.0, 00:34:27.814 "max_latency_us": 71215.78666666667 00:34:27.814 } 00:34:27.814 ], 
00:34:27.814 "core_count": 1 00:34:27.814 } 00:34:27.814 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1196042 00:34:27.814 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1196042 ']' 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1196042 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196042 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196042' 00:34:27.815 killing process with pid 1196042 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1196042 00:34:27.815 Received shutdown signal, test time was about 10.000000 seconds 00:34:27.815 00:34:27.815 Latency(us) 00:34:27.815 [2024-12-05T12:38:50.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:27.815 [2024-12-05T12:38:50.383Z] =================================================================================================================== 00:34:27.815 [2024-12-05T12:38:50.383Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1196042 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:27.815 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:27.815 rmmod nvme_tcp 00:34:27.815 rmmod nvme_fabrics 00:34:28.076 rmmod nvme_keyring 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:34:28.076 13:38:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1195836 ']' 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1195836 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1195836 ']' 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1195836 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1195836 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1195836' 00:34:28.076 killing process with pid 1195836 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1195836 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1195836 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:28.076 13:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.623 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:30.623 00:34:30.623 real 0m23.269s 00:34:30.623 user 0m24.934s 00:34:30.623 sys 0m7.919s 00:34:30.623 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:30.623 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:34:30.623 ************************************ 00:34:30.623 END TEST nvmf_queue_depth 00:34:30.623 ************************************ 00:34:30.623 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:30.623 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:30.623 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:30.623 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:30.623 ************************************ 00:34:30.624 START TEST nvmf_target_multipath 00:34:30.624 ************************************ 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:34:30.624 * Looking for test storage... 00:34:30.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:34:30.624 13:38:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:30.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.624 --rc genhtml_branch_coverage=1 00:34:30.624 --rc genhtml_function_coverage=1 00:34:30.624 --rc genhtml_legend=1 00:34:30.624 --rc geninfo_all_blocks=1 00:34:30.624 --rc geninfo_unexecuted_blocks=1 00:34:30.624 00:34:30.624 ' 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:30.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.624 --rc genhtml_branch_coverage=1 00:34:30.624 --rc genhtml_function_coverage=1 00:34:30.624 --rc genhtml_legend=1 00:34:30.624 --rc geninfo_all_blocks=1 00:34:30.624 --rc geninfo_unexecuted_blocks=1 00:34:30.624 00:34:30.624 ' 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:30.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.624 --rc genhtml_branch_coverage=1 00:34:30.624 --rc genhtml_function_coverage=1 00:34:30.624 --rc genhtml_legend=1 00:34:30.624 --rc geninfo_all_blocks=1 00:34:30.624 --rc 
geninfo_unexecuted_blocks=1 00:34:30.624 00:34:30.624 ' 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:30.624 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:30.624 --rc genhtml_branch_coverage=1 00:34:30.624 --rc genhtml_function_coverage=1 00:34:30.624 --rc genhtml_legend=1 00:34:30.624 --rc geninfo_all_blocks=1 00:34:30.624 --rc geninfo_unexecuted_blocks=1 00:34:30.624 00:34:30.624 ' 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
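The multipath test repeats the same lcov probe that opened this excerpt: scripts/common.sh splits two dotted version strings into arrays and compares them component by component ('lt 1.15 2' here, returning 0 because 1.15 < 2, clearing the way for the pre-2.0 '--rc lcov_branch_coverage=1' option names seen in the LCOV_OPTS export). A simplified stand-alone sketch of that comparison, not SPDK's exact cmp_versions implementation:

# Simplified version of the cmp_versions walk traced above.
lt() {  # lt 1.15 2 -> exit 0 when $1 < $2, compared component by component
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower component decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal versions are not 'less than'
}
lt 1.15 2 && echo "lcov is older than 2.0: use lcov_branch_coverage=1"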
00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:30.624 13:38:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:30.624 13:38:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:30.624 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:30.624 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:30.624 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:30.624 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:30.624 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:30.624 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:30.624 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:30.625 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:30.625 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:34:30.625 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:30.625 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:34:30.625 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:30.625 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:30.625 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:30.625 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:30.625 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:30.625 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.625 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:30.625 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.625 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:30.625 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:30.625 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:34:30.625 13:38:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:38.769 13:39:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:38.769 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:38.769 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:38.769 13:39:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:38.769 Found net devices under 0000:31:00.0: cvl_0_0 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:38.769 Found net devices under 0000:31:00.1: cvl_0_1 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:38.769 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:38.770 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:38.770 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:38.770 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:38.770 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:38.770 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:38.770 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:38.770 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:38.770 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:39.030 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:39.030 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:39.030 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:39.030 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:39.030 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:39.030 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:39.030 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:39.030 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:39.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:39.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:34:39.030 00:34:39.030 --- 10.0.0.2 ping statistics --- 00:34:39.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.030 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:34:39.030 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:39.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:39.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:34:39.030 00:34:39.030 --- 10.0.0.1 ping statistics --- 00:34:39.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.030 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:34:39.292 only one NIC for nvmf test 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:39.292 rmmod nvme_tcp 00:34:39.292 rmmod nvme_fabrics 00:34:39.292 rmmod nvme_keyring 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:39.292 13:39:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:39.292 13:39:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:41.836 13:39:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:41.836 00:34:41.836 real 0m11.080s 00:34:41.836 user 0m2.427s 00:34:41.836 sys 0m6.606s 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:41.836 ************************************ 00:34:41.836 END TEST nvmf_target_multipath 00:34:41.836 ************************************ 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:41.836 ************************************ 00:34:41.836 START TEST nvmf_zcopy 00:34:41.836 ************************************ 00:34:41.836 13:39:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:41.836 * Looking for test storage... 
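The teardown traced above keeps the firewall cleanup surgical: every rule the test inserts (nvmf/common.sh@790) carries an SPDK_NVMF comment, so nvmftestfini (@791) can drop exactly those rules by filtering a full iptables-save dump, leaving unrelated rules untouched. The pattern, reduced to its two halves exactly as they appear in the log:

  # setup: tag the accept rule so it can be identified later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
           -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # teardown: re-load the ruleset minus every SPDK_NVMF-tagged rule
  iptables-save | grep -v SPDK_NVMF | iptables-restore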
00:34:41.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:41.836 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:41.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.837 --rc genhtml_branch_coverage=1 00:34:41.837 --rc genhtml_function_coverage=1 00:34:41.837 --rc genhtml_legend=1 00:34:41.837 --rc geninfo_all_blocks=1 00:34:41.837 --rc geninfo_unexecuted_blocks=1 00:34:41.837 00:34:41.837 ' 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:41.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.837 --rc genhtml_branch_coverage=1 00:34:41.837 --rc genhtml_function_coverage=1 00:34:41.837 --rc genhtml_legend=1 00:34:41.837 --rc geninfo_all_blocks=1 00:34:41.837 --rc geninfo_unexecuted_blocks=1 00:34:41.837 00:34:41.837 ' 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:41.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.837 --rc genhtml_branch_coverage=1 00:34:41.837 --rc genhtml_function_coverage=1 00:34:41.837 --rc genhtml_legend=1 00:34:41.837 --rc geninfo_all_blocks=1 00:34:41.837 --rc geninfo_unexecuted_blocks=1 00:34:41.837 00:34:41.837 ' 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:41.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:41.837 --rc genhtml_branch_coverage=1 00:34:41.837 --rc genhtml_function_coverage=1 00:34:41.837 --rc genhtml_legend=1 00:34:41.837 --rc geninfo_all_blocks=1 00:34:41.837 --rc geninfo_unexecuted_blocks=1 00:34:41.837 00:34:41.837 ' 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:41.837 13:39:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:34:41.837 13:39:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:49.979 13:39:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:49.979 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:34:49.980 Found 0000:31:00.0 (0x8086 - 0x159b) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:34:49.980 Found 0000:31:00.1 (0x8086 - 0x159b) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:34:49.980 Found net devices under 0000:31:00.0: cvl_0_0 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:34:49.980 Found net devices under 0000:31:00.1: cvl_0_1 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:49.980 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:49.980 13:39:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:50.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:50.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.626 ms 00:34:50.241 00:34:50.241 --- 10.0.0.2 ping statistics --- 00:34:50.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.241 rtt min/avg/max/mdev = 0.626/0.626/0.626/0.000 ms 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:50.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:50.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:34:50.241 00:34:50.241 --- 10.0.0.1 ping statistics --- 00:34:50.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.241 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1208108 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1208108 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1208108 ']' 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:50.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:50.241 13:39:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:50.241 [2024-12-05 13:39:12.765071] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:50.241 [2024-12-05 13:39:12.766237] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:34:50.241 [2024-12-05 13:39:12.766291] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:50.503 [2024-12-05 13:39:12.875543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:50.503 [2024-12-05 13:39:12.925460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:50.503 [2024-12-05 13:39:12.925510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:50.503 [2024-12-05 13:39:12.925518] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:50.503 [2024-12-05 13:39:12.925526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:50.503 [2024-12-05 13:39:12.925532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:50.503 [2024-12-05 13:39:12.926314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:50.503 [2024-12-05 13:39:13.003337] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:50.503 [2024-12-05 13:39:13.003614] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
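With the namespaces re-created, nvmfappstart launches the target inside cvl_0_0_ns_spdk with -m 0x2 and --interrupt-mode, records nvmfpid=1208108, and waitforlisten blocks until the RPC socket answers before any rpc_cmd runs. A sketch of that launch-and-wait pattern, with $rootdir standing in for the workspace path logged above, the default /var/tmp/spdk.sock socket assumed, and rpc_get_methods used as the liveness probe; waitforlisten's actual internals may differ:

  ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" \
      -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
  nvmfpid=$!
  # poll the RPC socket; bail out early if the target process died
  until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1
      sleep 0.5
  done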
00:34:51.073 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:51.073 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:34:51.073 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:51.073 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:51.073 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:51.073 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:51.073 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:51.073 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:51.073 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.073 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:51.073 [2024-12-05 13:39:13.635204] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:51.333 [2024-12-05 13:39:13.655525] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:51.333 13:39:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:51.333 malloc0 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:34:51.333 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:34:51.334 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:51.334 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:51.334 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:51.334 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:51.334 { 00:34:51.334 "params": { 00:34:51.334 "name": "Nvme$subsystem", 00:34:51.334 "trtype": "$TEST_TRANSPORT", 00:34:51.334 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:51.334 "adrfam": "ipv4", 00:34:51.334 "trsvcid": "$NVMF_PORT", 00:34:51.334 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:51.334 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:51.334 "hdgst": ${hdgst:-false}, 00:34:51.334 "ddgst": ${ddgst:-false} 00:34:51.334 }, 00:34:51.334 "method": "bdev_nvme_attach_controller" 00:34:51.334 } 00:34:51.334 EOF 00:34:51.334 )") 00:34:51.334 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:51.334 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:51.334 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:51.334 13:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:51.334 "params": { 00:34:51.334 "name": "Nvme1", 00:34:51.334 "trtype": "tcp", 00:34:51.334 "traddr": "10.0.0.2", 00:34:51.334 "adrfam": "ipv4", 00:34:51.334 "trsvcid": "4420", 00:34:51.334 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:51.334 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:51.334 "hdgst": false, 00:34:51.334 "ddgst": false 00:34:51.334 }, 00:34:51.334 "method": "bdev_nvme_attach_controller" 00:34:51.334 }' 00:34:51.334 [2024-12-05 13:39:13.750762] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:34:51.334 [2024-12-05 13:39:13.750838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1208440 ] 00:34:51.334 [2024-12-05 13:39:13.833801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:51.334 [2024-12-05 13:39:13.875588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:51.905 Running I/O for 10 seconds... 00:34:53.785 6658.00 IOPS, 52.02 MiB/s [2024-12-05T12:39:17.295Z] 6703.00 IOPS, 52.37 MiB/s [2024-12-05T12:39:18.235Z] 6722.67 IOPS, 52.52 MiB/s [2024-12-05T12:39:19.619Z] 6727.25 IOPS, 52.56 MiB/s [2024-12-05T12:39:20.559Z] 6748.80 IOPS, 52.73 MiB/s [2024-12-05T12:39:21.499Z] 7248.83 IOPS, 56.63 MiB/s [2024-12-05T12:39:22.439Z] 7607.86 IOPS, 59.44 MiB/s [2024-12-05T12:39:23.380Z] 7879.88 IOPS, 61.56 MiB/s [2024-12-05T12:39:24.324Z] 8089.44 IOPS, 63.20 MiB/s [2024-12-05T12:39:24.324Z] 8255.90 IOPS, 64.50 MiB/s 00:35:01.756 Latency(us) 00:35:01.756 [2024-12-05T12:39:24.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:01.756 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:35:01.756 Verification LBA range: start 0x0 length 0x1000 00:35:01.756 Nvme1n1 : 10.01 8258.09 64.52 0.00 0.00 15448.09 1256.11 27306.67 00:35:01.756 [2024-12-05T12:39:24.324Z] =================================================================================================================== 00:35:01.756 [2024-12-05T12:39:24.324Z] Total : 8258.09 64.52 0.00 0.00 15448.09 1256.11 27306.67 00:35:02.022 13:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1210418 00:35:02.022 13:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:35:02.022 13:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:02.022 13:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:35:02.022 13:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:35:02.022 13:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:35:02.022 13:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:35:02.022 13:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:02.022 13:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:02.022 { 00:35:02.022 "params": { 00:35:02.022 "name": "Nvme$subsystem", 00:35:02.022 "trtype": "$TEST_TRANSPORT", 00:35:02.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:02.022 "adrfam": "ipv4", 00:35:02.022 "trsvcid": "$NVMF_PORT", 00:35:02.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:02.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:02.022 "hdgst": ${hdgst:-false}, 00:35:02.022 "ddgst": ${ddgst:-false} 00:35:02.022 }, 00:35:02.022 "method": "bdev_nvme_attach_controller" 00:35:02.022 } 00:35:02.022 EOF 00:35:02.022 )") 00:35:02.022 [2024-12-05 13:39:24.338715] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:35:02.022 [2024-12-05 13:39:24.338742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:02.022 13:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:35:02.022 13:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:35:02.022 [2024-12-05 13:39:24.346689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:02.022 [2024-12-05 13:39:24.346699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:02.022 13:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:35:02.022 13:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:02.022 "params": { 00:35:02.022 "name": "Nvme1", 00:35:02.022 "trtype": "tcp", 00:35:02.022 "traddr": "10.0.0.2", 00:35:02.022 "adrfam": "ipv4", 00:35:02.022 "trsvcid": "4420", 00:35:02.022 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:02.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:02.022 "hdgst": false, 00:35:02.022 "ddgst": false 00:35:02.022 }, 00:35:02.022 "method": "bdev_nvme_attach_controller" 00:35:02.022 }' 00:35:02.022 [2024-12-05 13:39:24.354688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:02.022 [2024-12-05 13:39:24.354697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:02.022 [2024-12-05 13:39:24.362687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:02.022 [2024-12-05 13:39:24.362695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:02.022 [2024-12-05 13:39:24.370687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:02.022 [2024-12-05 13:39:24.370696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:02.022 [2024-12-05 13:39:24.382687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:02.022 [2024-12-05 13:39:24.382695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:02.022 [2024-12-05 13:39:24.383140] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
00:35:02.022 [2024-12-05 13:39:24.383189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1210418 ]
00:35:02.022 [2024-12-05 13:39:24.390687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:02.022 [2024-12-05 13:39:24.390696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the same pair of errors recurs for each retried add-namespace attempt from 13:39:24.398 through 13:39:24.454; only the timestamps change)
00:35:02.022 [2024-12-05 13:39:24.459277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
(further attempts are rejected the same way from 13:39:24.462 through 13:39:24.486)
00:35:02.022 [2024-12-05 13:39:24.494528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
(the retries and rejections continue uninterrupted from 13:39:24.494 through 13:39:24.798)
00:35:02.337 Running I/O for 5 seconds...
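The 5-second random read/write job starting here is launched the same way as the 10-second verify pass above: bdevperf reads the generated attach-controller configuration as JSON from a file descriptor. A minimal sketch of that invocation pattern, assuming the gen_nvmf_target_json helper from nvmf/common.sh is in scope (the /dev/fd/63 seen in the trace is simply what bash process substitution expands to):

  # feed the generated controller config to bdevperf over a pipe fd:
  # 5 s runtime, queue depth 128, 50/50 random read/write, 8 KiB I/O size
  ./spdk/build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192

The "Requested NSID 1 already in use" churn surrounding this job evidently comes from the test repeatedly re-issuing nvmf_subsystem_add_ns for a namespace that is still attached while the I/O runs.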
00:35:02.337 [2024-12-05 13:39:24.811330] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:35:02.337 [2024-12-05 13:39:24.811346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(while the 5-second job runs, this error pair recurs for every retried add-namespace attempt, roughly every 10-15 ms, from 13:39:24.824 through 13:39:25.802; only the timestamps change)
00:35:03.493 19127.00 IOPS, 149.43 MiB/s [2024-12-05T12:39:26.061Z]
(the retries and rejections continue at the same pace from 13:39:25.809 through 13:39:26.717)
00:35:04.277 [2024-12-05 13:39:26.730301] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-12-05 13:39:26.730317]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.277 [2024-12-05 13:39:26.743235] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.277 [2024-12-05 13:39:26.743249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.277 [2024-12-05 13:39:26.755807] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.277 [2024-12-05 13:39:26.755821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.277 [2024-12-05 13:39:26.767446] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.277 [2024-12-05 13:39:26.767462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.277 [2024-12-05 13:39:26.779551] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.277 [2024-12-05 13:39:26.779567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.277 [2024-12-05 13:39:26.791661] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.277 [2024-12-05 13:39:26.791678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.277 [2024-12-05 13:39:26.802593] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.277 [2024-12-05 13:39:26.802608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.277 19208.00 IOPS, 150.06 MiB/s [2024-12-05T12:39:26.845Z] [2024-12-05 13:39:26.814057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.277 [2024-12-05 13:39:26.814072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.277 [2024-12-05 13:39:26.827166] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.277 [2024-12-05 13:39:26.827180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.277 [2024-12-05 13:39:26.839973] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.277 [2024-12-05 13:39:26.839988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:26.850844] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:26.850861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:26.856959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:26.856974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:26.869760] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:26.869775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:26.882411] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:26.882426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:26.895413] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:26.895428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 
13:39:26.908096] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:26.908111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:26.919349] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:26.919364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:26.931781] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:26.931795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:26.943681] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:26.943696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:26.955317] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:26.955331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:26.967827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:26.967842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:26.979613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:26.979628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:26.991608] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:26.991623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:27.004225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:27.004240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:27.015600] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:27.015614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:27.027557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:27.027571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:27.039918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:27.039933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:27.050673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:27.050688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:27.063531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:27.063546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:27.075326] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:27.075340] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:27.087888] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:27.087903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.538 [2024-12-05 13:39:27.098800] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.538 [2024-12-05 13:39:27.098815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.104986] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.105002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.117869] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.117884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.130765] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.130779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.136997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.137012] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.145831] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.145845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.158603] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.158618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.171362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.171377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.183648] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.183663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.195867] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.195881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.207557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.207572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.219299] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.219315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.232145] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.232161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.243021] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.243035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.255400] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.255416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.268149] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.268164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.279047] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.279062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.291677] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.291692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.303976] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.303992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.314767] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.314782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.320701] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.320715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.329294] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.329308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.342384] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.342400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:04.800 [2024-12-05 13:39:27.355736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:04.800 [2024-12-05 13:39:27.355751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.061 [2024-12-05 13:39:27.367938] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.061 [2024-12-05 13:39:27.367954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.061 [2024-12-05 13:39:27.378702] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.061 [2024-12-05 13:39:27.378717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.061 [2024-12-05 13:39:27.391882] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.061 [2024-12-05 13:39:27.391897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.061 [2024-12-05 13:39:27.402803] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.061 [2024-12-05 13:39:27.402818] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.408647] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.408662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.417271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.417287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.426030] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.426046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.438624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.438641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.444746] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.444761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.458374] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.458390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.471535] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.471550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.483879] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.483895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.494673] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.494689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.507554] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.507569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.519672] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.519688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.531958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.531973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.543649] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.543665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.555943] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.555958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.567856] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.567875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.579787] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.579807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.591075] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.591090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.603874] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.603889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.616015] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.616030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.062 [2024-12-05 13:39:27.626822] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.062 [2024-12-05 13:39:27.626838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.333 [2024-12-05 13:39:27.632906] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.333 [2024-12-05 13:39:27.632922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.333 [2024-12-05 13:39:27.646210] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.333 [2024-12-05 13:39:27.646225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.333 [2024-12-05 13:39:27.659523] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.333 [2024-12-05 13:39:27.659538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.333 [2024-12-05 13:39:27.671824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.333 [2024-12-05 13:39:27.671840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.333 [2024-12-05 13:39:27.683480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.683496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.695889] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.695905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.706782] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.706797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.712921] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.712937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.721695] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.721711] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.734378] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.734394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.747076] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.747091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.759795] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.759811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.770801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.770816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.776801] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.776817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.785225] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.785249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.794143] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.794158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.807069] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.807085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 19197.67 IOPS, 149.98 MiB/s [2024-12-05T12:39:27.902Z] [2024-12-05 13:39:27.819381] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.819396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.831834] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.831849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.844012] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.844028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.854826] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.854841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.860613] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.860628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 13:39:27.874057] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:05.334 [2024-12-05 13:39:27.874072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:05.334 [2024-12-05 
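The interleaved "IOPS, MiB/s" lines appear to be periodic progress output from the I/O load running while the namespace-add attempts repeat. As a back-of-the-envelope check (a sketch, not part of the log), throughput divided by IOPS gives the average I/O size, roughly 8 KiB here:

  awk 'BEGIN {
      iops  = 19197.67;   # I/Os per second, taken from the progress line above
      mibps = 149.98;     # throughput in MiB/s, same line
      bpio  = mibps * 1048576 / iops;   # bytes per I/O
      printf "avg I/O size: %.0f bytes (~%.1f KiB)\n", bpio, bpio / 1024;
  }'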
[... error pair repeats, 13:39:27.819381 through 13:39:28.805835 ...]
00:35:06.379 19219.25 IOPS, 150.15 MiB/s [2024-12-05T12:39:28.947Z]
[... error pair repeats, 13:39:28.818743 through 13:39:29.331592 ...]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:06.901 [2024-12-05 13:39:29.343667] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.901 [2024-12-05 13:39:29.343682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:06.901 [2024-12-05 13:39:29.355812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.901 [2024-12-05 13:39:29.355828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:06.901 [2024-12-05 13:39:29.366653] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.901 [2024-12-05 13:39:29.366669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:06.901 [2024-12-05 13:39:29.372563] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.901 [2024-12-05 13:39:29.372578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:06.901 [2024-12-05 13:39:29.381439] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.901 [2024-12-05 13:39:29.381453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:06.901 [2024-12-05 13:39:29.394243] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.901 [2024-12-05 13:39:29.394259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:06.901 [2024-12-05 13:39:29.406717] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.901 [2024-12-05 13:39:29.406733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:06.901 [2024-12-05 13:39:29.413179] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.901 [2024-12-05 13:39:29.413195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:06.901 [2024-12-05 13:39:29.426143] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.901 [2024-12-05 13:39:29.426158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:06.901 [2024-12-05 13:39:29.438980] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.901 [2024-12-05 13:39:29.438995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:06.901 [2024-12-05 13:39:29.451819] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.901 [2024-12-05 13:39:29.451834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:06.901 [2024-12-05 13:39:29.462788] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:06.901 [2024-12-05 13:39:29.462804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.161 [2024-12-05 13:39:29.468912] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.161 [2024-12-05 13:39:29.468928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.161 [2024-12-05 13:39:29.482004] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.161 [2024-12-05 13:39:29.482020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.161 [2024-12-05 13:39:29.494841] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.161 [2024-12-05 13:39:29.494857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.161 [2024-12-05 13:39:29.501009] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.161 [2024-12-05 13:39:29.501025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.161 [2024-12-05 13:39:29.514376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.161 [2024-12-05 13:39:29.514391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.161 [2024-12-05 13:39:29.527095] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.161 [2024-12-05 13:39:29.527110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.161 [2024-12-05 13:39:29.539506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.161 [2024-12-05 13:39:29.539521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.161 [2024-12-05 13:39:29.551597] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.161 [2024-12-05 13:39:29.551613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.161 [2024-12-05 13:39:29.563848] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.161 [2024-12-05 13:39:29.563868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.161 [2024-12-05 13:39:29.575824] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.161 [2024-12-05 13:39:29.575840] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.161 [2024-12-05 13:39:29.586736] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.161 [2024-12-05 13:39:29.586752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.161 [2024-12-05 13:39:29.592549] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.161 [2024-12-05 13:39:29.592565] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.161 [2024-12-05 13:39:29.602118] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.161 [2024-12-05 13:39:29.602133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.161 [2024-12-05 13:39:29.615044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.161 [2024-12-05 13:39:29.615059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.161 [2024-12-05 13:39:29.627754] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.161 [2024-12-05 13:39:29.627769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.161 [2024-12-05 13:39:29.639939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.162 [2024-12-05 13:39:29.639955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.162 [2024-12-05 13:39:29.651241] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.162 [2024-12-05 13:39:29.651257] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.162 [2024-12-05 13:39:29.663510] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.162 [2024-12-05 13:39:29.663526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.162 [2024-12-05 13:39:29.675883] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.162 [2024-12-05 13:39:29.675899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.162 [2024-12-05 13:39:29.688094] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.162 [2024-12-05 13:39:29.688109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.162 [2024-12-05 13:39:29.698925] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.162 [2024-12-05 13:39:29.698940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.162 [2024-12-05 13:39:29.704834] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.162 [2024-12-05 13:39:29.704849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.162 [2024-12-05 13:39:29.713827] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.162 [2024-12-05 13:39:29.713842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.162 [2024-12-05 13:39:29.726609] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.162 [2024-12-05 13:39:29.726625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.732930] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.732946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.741178] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.741193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.749227] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.749242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.762229] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.762244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.777341] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.777356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.786056] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.786070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.798861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.798878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.805040] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.805054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 19211.00 IOPS, 150.09 MiB/s [2024-12-05T12:39:29.990Z] [2024-12-05 13:39:29.817615] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.817630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 00:35:07.422 Latency(us) 00:35:07.422 [2024-12-05T12:39:29.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:07.422 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:35:07.422 Nvme1n1 : 5.01 19210.94 150.09 0.00 0.00 6655.77 2607.79 12779.52 00:35:07.422 [2024-12-05T12:39:29.990Z] =================================================================================================================== 00:35:07.422 [2024-12-05T12:39:29.990Z] Total : 19210.94 150.09 0.00 0.00 6655.77 2607.79 12779.52 00:35:07.422 [2024-12-05 13:39:29.822692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.822709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.830690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.830704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.838692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.838702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.846692] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.846703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.854691] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.854702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.862690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.862700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.870690] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.870700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.878688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.878698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.886688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.886696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.894688] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.894695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 
13:39:29.902687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.902694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.910687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.910696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.918689] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.918698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.926687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.926695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 [2024-12-05 13:39:29.934687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:35:07.422 [2024-12-05 13:39:29.934694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:07.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1210418) - No such process 00:35:07.422 13:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1210418 00:35:07.422 13:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:07.422 13:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.422 13:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:07.422 13:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.422 13:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:07.422 13:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.422 13:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:07.422 delay0 00:35:07.422 13:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.422 13:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:35:07.422 13:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.422 13:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:35:07.422 13:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.422 13:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:35:07.683 [2024-12-05 13:39:30.038672] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 
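Read as plain shell (rpc_cmd in the harness forwards its arguments to scripts/rpc.py), the abort step traced above amounts to the sketch below; the paths, NQN and bdev names come straight from the trace, the rest is an assumption, not the harness's literal code:

    #!/usr/bin/env bash
    # Sketch of zcopy.sh@52-56: re-export the namespace behind an artificial
    # delay bdev so in-flight I/O lives long enough to be aborted.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$spdk/scripts/rpc.py"

    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # bdev_delay_create latencies (-r/-t avg/p99 read, -w/-n avg/p99 write)
    # are in microseconds, so 1000000 means one full second per I/O.
    $rpc bdev_delay_create -b malloc0 -d delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # Drive 50/50 random read/write at queue depth 64 for 5 seconds and
    # issue aborts against it (SPDK's bundled abort example app).
    "$spdk/build/examples/abort" -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'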
00:35:14.262 Initializing NVMe Controllers
00:35:14.262 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:14.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:35:14.262 Initialization complete. Launching workers.
00:35:14.262 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1135
00:35:14.262 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1415, failed to submit 40
00:35:14.262 success 1277, unsuccessful 138, failed 0
00:35:14.262 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:35:14.262 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:35:14.262 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:14.262 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:35:14.262 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:14.262 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:35:14.262 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1208108 ']'
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1208108
[... killprocess bookkeeping elided (autotest_common.sh@954-964: pid, uname and process-name checks resolving to reactor_1) ...]
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1208108'
00:35:14.263 killing process with pid 1208108
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1208108
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1208108
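A quick consistency check the abort tally permits: 1277 successful + 138 unsuccessful = 1415 aborts submitted, and 1415 + 40 that failed to submit = 1455 abort attempts in total, which matches the tool's per-controller accounting above. The module teardown that follows (nvmf/common.sh@124-128) retries the unload because the kernel can still hold references right after the controllers disconnect; a minimal sketch of that loop, where the break-out and back-off are assumptions (the trace only shows the first, successful pass):

    set +e
    for i in {1..20}; do
        # The traced run's first attempt succeeds and prints the rmmod lines
        # for nvme_tcp, nvme_fabrics and nvme_keyring seen above.
        modprobe -v -r nvme-tcp && break
        sleep 1   # assumed back-off between attempts
    done
    modprobe -v -r nvme-fabrics   # second traced unload step
    set -e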
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:14.263 13:39:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:16.804 13:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:16.804
00:35:16.804 real    0m34.834s
00:35:16.804 user    0m43.815s
00:35:16.804 sys     0m12.846s
[... run_test bookkeeping elided (common/autotest_common.sh@1130 xtrace_disable / @10 set +x) ...]
00:35:16.804 ************************************
00:35:16.804 END TEST nvmf_zcopy
00:35:16.804 ************************************
00:35:16.804 13:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:35:16.804 13:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:35:16.804 13:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:16.804 13:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:35:16.804 ************************************
00:35:16.804 START TEST nvmf_nmic
00:35:16.804 ************************************
00:35:16.804 13:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:35:16.804 * Looking for test storage...
00:35:16.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:35:16.804 13:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:35:16.804 13:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version
00:35:16.804 13:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:35:16.804 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:35:16.804 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
[... cmp_versions xtrace elided (scripts/common.sh@333-368): both version strings are split on IFS=.-: into ver1/ver2, each component is normalized through decimal, and the first position already compares 1 < 2, so the '<' test holds ...]
00:35:16.804 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:35:16.804 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
[... four near-identical multi-line exports of LCOV_OPTS and LCOV elided (common/autotest_common.sh@1706-1707); they carry the lcov_branch/lcov_function, genhtml_branch/genhtml_function/genhtml_legend, and geninfo_all_blocks/geninfo_unexecuted_blocks coverage flags ...]
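The lt/cmp_versions dance above is a component-wise numeric version compare deciding whether this lcov is older than 2.x. A standalone sketch of the idea, where the function names mirror the traced scripts/common.sh helpers but the body (zero-padding the shorter version, the final equality fallthrough) is an assumption:

    #!/usr/bin/env bash
    # Compare dotted versions component by component, as the trace does:
    # split on ".", "-" or ":" and compare numerically, left to right.
    cmp_versions() {
        local -a ver1 ver2
        local IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # Missing components count as 0 (so 1.15 is treated as 1.15.0 vs 2.0.0).
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov 1.15 predates 2"   # matches the traced result (return 0)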
00:35:16.804 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:35:16.804 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:35:16.804 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
[... nvmf/common.sh@9-22 defaults elided: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS=, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_CONNECT='nvme connect', NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn ...]
00:35:16.804 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:16.804 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:35:16.804 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:35:16.804 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:16.804 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:35:16.804 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:16.804 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:16.804 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-6 elided: each repeated sourcing prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH again, so the exported PATH repeats that toolchain triple several times ahead of the stock /usr/local/bin:...:/var/lib/snapd/snap/bin tail ...]
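The repeated toolchain prefixes are harmless (PATH lookup stops at the first hit) but noisy. If one wanted to de-duplicate the exported PATH, a sketch like the following would do it; this is purely illustrative and not part of the traced scripts:

    # Keep the first occurrence of each PATH entry, preserving order.
    dedup_path() {
        local entry out=
        local IFS=:
        for entry in $PATH; do
            case ":$out:" in
                *":$entry:"*) ;;                  # already present, skip
                *) out=${out:+$out:}$entry ;;     # append, adding ':' after the first
            esac
        done
        printf '%s\n' "$out"
    }
    export PATH=$(dedup_path)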
00:35:16.805 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:35:16.805 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:35:16.805 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:35:16.805 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:35:16.805 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:35:16.805 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
00:35:16.805 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
00:35:16.805 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:35:16.805 13:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
[... remove_spdk_ns and xtrace bookkeeping elided ...]
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
[... nvmf/common.sh@315-361 elided: the pci_devs/pci_net_devs/pci_drivers/net_devs arrays are initialized and the known e810/x722/mlx device IDs (0x1592, 0x159b, 0x37d2, 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x101b, 0x1017, 0x1019, 0x1015, 0x1013) are collected from the PCI bus cache; e810 devices are selected for the tcp transport ...]
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:35:24.987 Found 0000:31:00.0 (0x8086 - 0x159b)
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:35:24.987 Found 0000:31:00.1 (0x8086 - 0x159b)
[... per-device driver checks (@368-398) and the net-device scan (@410-427: pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*), [[ up == up ]] link checks) elided ...]
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:35:24.987 Found net devices under 0000:31:00.0: cvl_0_0
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:35:24.987 Found net devices under 0000:31:00.1: cvl_0_1
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
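The two discovered ports of the same physical NIC become the test's two endpoints: one stays in the root namespace as the initiator, the other moves into a private network namespace as the target, so traffic crosses a real link. Condensed to bare shell, the traced setup (including the link-up steps that follow just below) is:

    ns=cvl_0_0_ns_spdk
    # Move the target-side port into its own network namespace and address both ends.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator, root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0   # target, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up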
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:35:24.987 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:24.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:24.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms
00:35:24.988
00:35:24.988 --- 10.0.0.2 ping statistics ---
00:35:24.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:24.988 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms
00:35:24.988 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:24.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:24.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms
00:35:24.988
00:35:24.988 --- 10.0.0.1 ping statistics ---
00:35:24.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:24.988 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms
00:35:24.988 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:24.988 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
[... transport-option plumbing elided (nvmf/common.sh@478-496: not iso, not rdma) ...]
00:35:24.988 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:24.988 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:24.988 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:35:24.988 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:35:24.988 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:24.988 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1217376
nvmf/common.sh@510 -- # waitforlisten 1217376 00:35:24.988 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:24.988 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1217376 ']' 00:35:24.988 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:24.988 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:24.988 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:24.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:24.988 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:24.988 13:39:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:24.988 [2024-12-05 13:39:47.503016] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:24.988 [2024-12-05 13:39:47.504158] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:35:24.988 [2024-12-05 13:39:47.504210] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:25.248 [2024-12-05 13:39:47.595208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:25.248 [2024-12-05 13:39:47.637728] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:25.248 [2024-12-05 13:39:47.637765] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:25.248 [2024-12-05 13:39:47.637773] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:25.248 [2024-12-05 13:39:47.637780] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:25.248 [2024-12-05 13:39:47.637786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:25.248 [2024-12-05 13:39:47.639388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:25.248 [2024-12-05 13:39:47.639506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:25.248 [2024-12-05 13:39:47.639664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.248 [2024-12-05 13:39:47.639665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:25.248 [2024-12-05 13:39:47.697100] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:25.248 [2024-12-05 13:39:47.697118] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:25.248 [2024-12-05 13:39:47.698141] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:35:25.248 [2024-12-05 13:39:47.698612] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:25.248 [2024-12-05 13:39:47.698725] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:25.819 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:25.819 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:35:25.819 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:25.819 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:25.819 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:25.819 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:25.819 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:25.819 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.819 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:25.819 [2024-12-05 13:39:48.348398] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:25.819 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.819 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:25.819 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.819 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:26.079 Malloc0 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
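nvmfappstart launched nvmf_tgt inside the namespace with --interrupt-mode, which is why every reactor and spdk_thread above reports being switched to intr mode, and waitforlisten held the script until the RPC socket answered. With the target up, the rpc_cmd calls provision it end to end: a TCP transport with an 8 KiB IO unit size, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 carrying that bdev as a namespace, and a listener on 10.0.0.2:4420. A sketch of the same flow as direct commands; the readiness loop is illustrative rather than the exact waitforlisten helper, and script paths are shortened:

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!
  until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do   # poll /var/tmp/spdk.sock
      kill -0 "$nvmfpid" 2>/dev/null || exit 1               # bail if the target died early
      sleep 0.5
  done
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Adding the namespace takes an exclusive_write claim on Malloc0, which is exactly what test case1 below depends on: re-adding the same bdev under cnode2 must fail with the "already claimed" bdev_open error and the -32602 JSON-RPC response.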
00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:26.079 [2024-12-05 13:39:48.420321] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:35:26.079 test case1: single bdev can't be used in multiple subsystems 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:26.079 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:26.080 [2024-12-05 13:39:48.456059] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:35:26.080 [2024-12-05 13:39:48.456079] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:35:26.080 [2024-12-05 13:39:48.456087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:35:26.080 request: 00:35:26.080 { 00:35:26.080 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:35:26.080 "namespace": { 00:35:26.080 "bdev_name": "Malloc0", 00:35:26.080 "no_auto_visible": false, 00:35:26.080 "hide_metadata": false 00:35:26.080 }, 00:35:26.080 "method": "nvmf_subsystem_add_ns", 00:35:26.080 "req_id": 1 00:35:26.080 } 00:35:26.080 Got JSON-RPC error response 00:35:26.080 response: 00:35:26.080 { 00:35:26.080 "code": -32602, 00:35:26.080 "message": "Invalid parameters" 00:35:26.080 } 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:35:26.080 13:39:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:35:26.080 Adding namespace failed - expected result. 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:35:26.080 test case2: host connect to nvmf target in multiple paths 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:26.080 [2024-12-05 13:39:48.468165] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.080 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:26.339 13:39:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:35:26.907 13:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:35:26.907 13:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:35:26.907 13:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:26.907 13:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:26.907 13:39:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:35:28.817 13:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:28.817 13:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:28.817 13:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:28.817 13:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:28.817 13:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:28.817 13:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:35:28.817 13:39:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:29.095 [global] 00:35:29.095 thread=1 00:35:29.095 invalidate=1 
00:35:29.095 rw=write 00:35:29.095 time_based=1 00:35:29.095 runtime=1 00:35:29.095 ioengine=libaio 00:35:29.095 direct=1 00:35:29.095 bs=4096 00:35:29.095 iodepth=1 00:35:29.095 norandommap=0 00:35:29.095 numjobs=1 00:35:29.095 00:35:29.095 verify_dump=1 00:35:29.095 verify_backlog=512 00:35:29.095 verify_state_save=0 00:35:29.095 do_verify=1 00:35:29.095 verify=crc32c-intel 00:35:29.095 [job0] 00:35:29.095 filename=/dev/nvme0n1 00:35:29.095 Could not set queue depth (nvme0n1) 00:35:29.362 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:29.362 fio-3.35 00:35:29.362 Starting 1 thread 00:35:30.745 00:35:30.745 job0: (groupid=0, jobs=1): err= 0: pid=1218355: Thu Dec 5 13:39:52 2024 00:35:30.745 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:30.745 slat (nsec): min=7568, max=63909, avg=27292.36, stdev=3573.14 00:35:30.745 clat (usec): min=338, max=1198, avg=890.63, stdev=153.79 00:35:30.745 lat (usec): min=366, max=1226, avg=917.92, stdev=153.63 00:35:30.745 clat percentiles (usec): 00:35:30.745 | 1.00th=[ 441], 5.00th=[ 594], 10.00th=[ 693], 20.00th=[ 742], 00:35:30.745 | 30.00th=[ 824], 40.00th=[ 889], 50.00th=[ 947], 60.00th=[ 979], 00:35:30.745 | 70.00th=[ 996], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1057], 00:35:30.745 | 99.00th=[ 1106], 99.50th=[ 1123], 99.90th=[ 1205], 99.95th=[ 1205], 00:35:30.745 | 99.99th=[ 1205] 00:35:30.745 write: IOPS=836, BW=3345KiB/s (3425kB/s)(3348KiB/1001msec); 0 zone resets 00:35:30.745 slat (usec): min=9, max=25958, avg=64.64, stdev=896.14 00:35:30.745 clat (usec): min=119, max=900, avg=552.14, stdev=148.84 00:35:30.745 lat (usec): min=129, max=26507, avg=616.78, stdev=908.75 00:35:30.745 clat percentiles (usec): 00:35:30.745 | 1.00th=[ 145], 5.00th=[ 306], 10.00th=[ 392], 20.00th=[ 433], 00:35:30.745 | 30.00th=[ 486], 40.00th=[ 502], 50.00th=[ 537], 60.00th=[ 586], 00:35:30.745 | 70.00th=[ 627], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 799], 00:35:30.745 | 99.00th=[ 857], 99.50th=[ 873], 99.90th=[ 898], 99.95th=[ 898], 00:35:30.745 | 99.99th=[ 898] 00:35:30.745 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:35:30.745 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:30.745 lat (usec) : 250=1.78%, 500=23.72%, 750=38.40%, 1000=25.06% 00:35:30.745 lat (msec) : 2=11.05% 00:35:30.745 cpu : usr=2.40%, sys=6.00%, ctx=1352, majf=0, minf=1 00:35:30.745 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:30.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:30.745 issued rwts: total=512,837,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:30.745 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:30.745 00:35:30.745 Run status group 0 (all jobs): 00:35:30.745 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:35:30.745 WRITE: bw=3345KiB/s (3425kB/s), 3345KiB/s-3345KiB/s (3425kB/s-3425kB/s), io=3348KiB (3428kB), run=1001-1001msec 00:35:30.745 00:35:30.745 Disk stats (read/write): 00:35:30.745 nvme0n1: ios=537/656, merge=0/0, ticks=1406/306, in_queue=1712, util=98.30% 00:35:30.745 13:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:30.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:35:30.745 13:39:53 
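Test case2 then exercised multipath: the kernel initiator connected to cnode1 twice (ports 4420 and 4421), waitforserial polled lsblk until the SPDKISFASTANDAWESOME serial appeared, and fio-wrapper drove the one-second libaio write-plus-verify job whose output appears above; "disconnected 2 controller(s)" confirms both paths were live. A host-side replay of those steps, with the generated job file collapsed into equivalent fio flags:

  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396
  fio --name=job0 --filename=/dev/nvme0n1 --rw=write --bs=4096 --iodepth=1 \
      --numjobs=1 --ioengine=libaio --direct=1 --thread --time_based --runtime=1 \
      --verify=crc32c-intel --do_verify=1 --verify_dump=1 --verify_backlog=512
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1          # tears down both controllers at once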
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:30.745 rmmod nvme_tcp 00:35:30.745 rmmod nvme_fabrics 00:35:30.745 rmmod nvme_keyring 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1217376 ']' 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1217376 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1217376 ']' 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1217376 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.745 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1217376 00:35:30.746 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:30.746 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:30.746 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 1217376' 00:35:30.746 killing process with pid 1217376 00:35:30.746 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1217376 00:35:30.746 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1217376 00:35:31.028 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:31.028 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:31.028 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:31.028 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:35:31.028 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:35:31.028 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:31.028 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:35:31.028 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:31.028 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:31.028 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.028 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:31.028 13:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.940 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:32.940 00:35:32.940 real 0m16.608s 00:35:32.940 user 0m36.787s 00:35:32.940 sys 0m8.179s 00:35:32.940 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:32.940 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:35:32.940 ************************************ 00:35:32.940 END TEST nvmf_nmic 00:35:32.940 ************************************ 00:35:32.940 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:32.940 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:32.940 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:32.940 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:33.201 ************************************ 00:35:33.201 START TEST nvmf_fio_target 00:35:33.201 ************************************ 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:35:33.201 * Looking for test storage... 
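Teardown mirrors setup: modprobe -v -r unloads nvme-tcp, nvme-fabrics and nvme-keyring, killprocess stops reactor pid 1217376, and iptr restores the firewall by filtering out every rule that carries the SPDK_NVMF comment attached at insert time, which is why the earlier ACCEPT rule was tagged. A sketch of that tag-and-filter cleanup; the namespace deletion is an assumption, since _remove_spdk_ns runs with its output redirected in this log:

  # earlier, at setup time:
  #   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
  #       -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop all tagged rules in one pass
  ip netns delete cvl_0_0_ns_spdk                        # assumed: the effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # shown above once the namespace is gone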
00:35:33.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:33.201 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:33.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.201 --rc genhtml_branch_coverage=1 00:35:33.201 --rc genhtml_function_coverage=1 00:35:33.201 --rc genhtml_legend=1 00:35:33.201 --rc geninfo_all_blocks=1 00:35:33.201 --rc geninfo_unexecuted_blocks=1 00:35:33.201 00:35:33.201 ' 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:33.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.202 --rc genhtml_branch_coverage=1 00:35:33.202 --rc genhtml_function_coverage=1 00:35:33.202 --rc genhtml_legend=1 00:35:33.202 --rc geninfo_all_blocks=1 00:35:33.202 --rc geninfo_unexecuted_blocks=1 00:35:33.202 00:35:33.202 ' 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:33.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.202 --rc genhtml_branch_coverage=1 00:35:33.202 --rc genhtml_function_coverage=1 00:35:33.202 --rc genhtml_legend=1 00:35:33.202 --rc geninfo_all_blocks=1 00:35:33.202 --rc geninfo_unexecuted_blocks=1 00:35:33.202 00:35:33.202 ' 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:33.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.202 --rc genhtml_branch_coverage=1 00:35:33.202 --rc genhtml_function_coverage=1 00:35:33.202 --rc genhtml_legend=1 00:35:33.202 --rc geninfo_all_blocks=1 00:35:33.202 --rc geninfo_unexecuted_blocks=1 00:35:33.202 
00:35:33.202 ' 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:33.202 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:33.463 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:33.463 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:33.463 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:33.463 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:33.463 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:33.463 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:33.463 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:33.463 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:35:33.463 13:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:41.602 13:40:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:41.602 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:41.603 13:40:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:35:41.603 Found 0000:31:00.0 (0x8086 - 0x159b) 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:35:41.603 Found 0000:31:00.1 (0x8086 - 0x159b) 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:35:41.603 Found net 
devices under 0000:31:00.0: cvl_0_0 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:35:41.603 Found net devices under 0000:31:00.1: cvl_0_1 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:41.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:41.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:35:41.603 00:35:41.603 --- 10.0.0.2 ping statistics --- 00:35:41.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.603 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:41.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:41.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:35:41.603 00:35:41.603 --- 10.0.0.1 ping statistics --- 00:35:41.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:41.603 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1223378 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1223378 00:35:41.603 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:41.604 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1223378 ']' 00:35:41.604 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:41.604 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:41.604 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:41.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
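The fio_target test then rebuilds the identical environment for its own target instance (pid 1223378). The NIC discovery that precedes it matches PCI functions against a table of known Intel E810/X722 and Mellanox device IDs and resolves each hit to its kernel netdev through /sys/bus/pci/devices/<bdf>/net, which is where the "Found net devices under 0000:31:00.x" lines come from. A simplified equivalent using lspci instead of the script's cached PCI bus scan:

  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do      # E810 ports, as matched above
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$path" ] || continue                               # skip if no netdev is bound
          echo "Found net devices under $pci: ${path##*/}"
      done
  done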
00:35:41.604 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:41.604 13:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:41.604 [2024-12-05 13:40:03.949015] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:41.604 [2024-12-05 13:40:03.950002] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:35:41.604 [2024-12-05 13:40:03.950037] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:41.604 [2024-12-05 13:40:04.041805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:41.604 [2024-12-05 13:40:04.078190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:41.604 [2024-12-05 13:40:04.078226] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:41.604 [2024-12-05 13:40:04.078234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:41.604 [2024-12-05 13:40:04.078240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:41.604 [2024-12-05 13:40:04.078246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:41.604 [2024-12-05 13:40:04.079745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:41.604 [2024-12-05 13:40:04.080876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:41.604 [2024-12-05 13:40:04.081120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:41.604 [2024-12-05 13:40:04.081206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:41.604 [2024-12-05 13:40:04.137611] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:41.604 [2024-12-05 13:40:04.137839] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:41.604 [2024-12-05 13:40:04.138874] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:41.604 [2024-12-05 13:40:04.139468] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:41.604 [2024-12-05 13:40:04.139552] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
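(Similarly, the interrupt-mode target launch above, plus the rpc.py provisioning that follows on the next lines, boils down to this sequence — a sketch with paths shortened to $SPDK and the hostnqn/hostid arguments omitted, not a verbatim replay; see the logged commands for exact arguments.)
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512                # repeated to create Malloc0..Malloc6
    $SPDK/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    $SPDK/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # likewise Malloc1, raid0, concat0
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420           # initiator side; yields nvme0n1..nvme0n4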
00:35:42.545 13:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:42.545 13:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:35:42.545 13:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:42.545 13:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:42.545 13:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:42.545 13:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:42.545 13:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:42.545 [2024-12-05 13:40:04.945678] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:42.545 13:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:42.807 13:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:42.807 13:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:43.066 13:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:43.066 13:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:43.066 13:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:43.066 13:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:43.327 13:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:43.327 13:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:43.327 13:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:43.587 13:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:43.587 13:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:43.847 13:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:43.847 13:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:44.108 13:40:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:35:44.108 13:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:44.108 13:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:44.368 13:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:44.368 13:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:44.628 13:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:44.628 13:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:44.628 13:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:44.888 [2024-12-05 13:40:07.309847] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:44.888 13:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:45.148 13:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:45.148 13:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:45.718 13:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:45.718 13:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:35:45.718 13:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:45.718 13:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:35:45.718 13:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:35:45.718 13:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:35:47.630 13:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:47.630 13:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o 
NAME,SERIAL 00:35:47.630 13:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:47.630 13:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:35:47.630 13:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:47.630 13:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:35:47.630 13:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:47.630 [global] 00:35:47.630 thread=1 00:35:47.630 invalidate=1 00:35:47.630 rw=write 00:35:47.630 time_based=1 00:35:47.630 runtime=1 00:35:47.630 ioengine=libaio 00:35:47.630 direct=1 00:35:47.630 bs=4096 00:35:47.631 iodepth=1 00:35:47.631 norandommap=0 00:35:47.631 numjobs=1 00:35:47.631 00:35:47.631 verify_dump=1 00:35:47.631 verify_backlog=512 00:35:47.631 verify_state_save=0 00:35:47.631 do_verify=1 00:35:47.631 verify=crc32c-intel 00:35:47.631 [job0] 00:35:47.631 filename=/dev/nvme0n1 00:35:47.908 [job1] 00:35:47.908 filename=/dev/nvme0n2 00:35:47.908 [job2] 00:35:47.908 filename=/dev/nvme0n3 00:35:47.908 [job3] 00:35:47.908 filename=/dev/nvme0n4 00:35:47.908 Could not set queue depth (nvme0n1) 00:35:47.908 Could not set queue depth (nvme0n2) 00:35:47.908 Could not set queue depth (nvme0n3) 00:35:47.908 Could not set queue depth (nvme0n4) 00:35:48.172 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:48.172 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:48.172 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:48.172 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:48.172 fio-3.35 00:35:48.172 Starting 4 threads 00:35:49.553 00:35:49.553 job0: (groupid=0, jobs=1): err= 0: pid=1224739: Thu Dec 5 13:40:11 2024 00:35:49.553 read: IOPS=15, BW=62.2KiB/s (63.7kB/s)(64.0KiB/1029msec) 00:35:49.553 slat (nsec): min=26585, max=27573, avg=27038.81, stdev=274.49 00:35:49.553 clat (usec): min=40870, max=42083, avg=41598.64, stdev=492.17 00:35:49.553 lat (usec): min=40897, max=42109, avg=41625.68, stdev=492.08 00:35:49.553 clat percentiles (usec): 00:35:49.553 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:49.553 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:35:49.553 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:49.553 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:49.553 | 99.99th=[42206] 00:35:49.553 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:35:49.553 slat (nsec): min=9944, max=66065, avg=30452.61, stdev=10810.03 00:35:49.553 clat (usec): min=276, max=1220, avg=670.65, stdev=137.60 00:35:49.553 lat (usec): min=289, max=1255, avg=701.10, stdev=141.90 00:35:49.553 clat percentiles (usec): 00:35:49.553 | 1.00th=[ 359], 5.00th=[ 445], 10.00th=[ 486], 20.00th=[ 570], 00:35:49.553 | 30.00th=[ 611], 40.00th=[ 644], 50.00th=[ 676], 60.00th=[ 701], 00:35:49.554 | 70.00th=[ 734], 80.00th=[ 775], 90.00th=[ 848], 95.00th=[ 914], 
00:35:49.554 | 99.00th=[ 996], 99.50th=[ 1029], 99.90th=[ 1221], 99.95th=[ 1221], 00:35:49.554 | 99.99th=[ 1221] 00:35:49.554 bw ( KiB/s): min= 4096, max= 4096, per=45.42%, avg=4096.00, stdev= 0.00, samples=1 00:35:49.554 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:49.554 lat (usec) : 500=11.36%, 750=61.74%, 1000=23.11% 00:35:49.554 lat (msec) : 2=0.76%, 50=3.03% 00:35:49.554 cpu : usr=0.58%, sys=1.56%, ctx=529, majf=0, minf=1 00:35:49.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.554 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:49.554 job1: (groupid=0, jobs=1): err= 0: pid=1224750: Thu Dec 5 13:40:11 2024 00:35:49.554 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:49.554 slat (nsec): min=6916, max=59227, avg=24090.88, stdev=5959.11 00:35:49.554 clat (usec): min=657, max=41725, avg=1120.48, stdev=1800.56 00:35:49.554 lat (usec): min=682, max=41737, avg=1144.57, stdev=1800.12 00:35:49.554 clat percentiles (usec): 00:35:49.554 | 1.00th=[ 799], 5.00th=[ 873], 10.00th=[ 922], 20.00th=[ 963], 00:35:49.554 | 30.00th=[ 996], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1074], 00:35:49.554 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1188], 00:35:49.554 | 99.00th=[ 1270], 99.50th=[ 1270], 99.90th=[41681], 99.95th=[41681], 00:35:49.554 | 99.99th=[41681] 00:35:49.554 write: IOPS=623, BW=2494KiB/s (2553kB/s)(2496KiB/1001msec); 0 zone resets 00:35:49.554 slat (nsec): min=9442, max=64905, avg=28981.44, stdev=9411.90 00:35:49.554 clat (usec): min=283, max=998, avg=620.67, stdev=133.96 00:35:49.554 lat (usec): min=294, max=1009, avg=649.65, stdev=137.70 00:35:49.554 clat percentiles (usec): 00:35:49.554 | 1.00th=[ 302], 5.00th=[ 379], 10.00th=[ 441], 20.00th=[ 490], 00:35:49.554 | 30.00th=[ 562], 40.00th=[ 603], 50.00th=[ 644], 60.00th=[ 676], 00:35:49.554 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 807], 00:35:49.554 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 996], 99.95th=[ 996], 00:35:49.554 | 99.99th=[ 996] 00:35:49.554 bw ( KiB/s): min= 4096, max= 4096, per=45.42%, avg=4096.00, stdev= 0.00, samples=1 00:35:49.554 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:49.554 lat (usec) : 500=11.97%, 750=34.77%, 1000=22.45% 00:35:49.554 lat (msec) : 2=30.72%, 50=0.09% 00:35:49.554 cpu : usr=1.10%, sys=3.70%, ctx=1136, majf=0, minf=1 00:35:49.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.554 issued rwts: total=512,624,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:49.554 job2: (groupid=0, jobs=1): err= 0: pid=1224767: Thu Dec 5 13:40:11 2024 00:35:49.554 read: IOPS=16, BW=67.4KiB/s (69.0kB/s)(68.0KiB/1009msec) 00:35:49.554 slat (nsec): min=27501, max=28500, avg=27829.88, stdev=280.10 00:35:49.554 clat (usec): min=1060, max=42175, avg=39426.48, stdev=9892.77 00:35:49.554 lat (usec): min=1088, max=42202, avg=39454.31, stdev=9892.73 00:35:49.554 clat percentiles (usec): 00:35:49.554 | 1.00th=[ 1057], 5.00th=[ 1057], 10.00th=[41157], 
20.00th=[41681], 00:35:49.554 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:35:49.554 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:49.554 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:49.554 | 99.99th=[42206] 00:35:49.554 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:35:49.554 slat (nsec): min=9460, max=55149, avg=31601.73, stdev=10294.70 00:35:49.554 clat (usec): min=185, max=964, avg=620.95, stdev=117.40 00:35:49.554 lat (usec): min=197, max=1000, avg=652.55, stdev=122.27 00:35:49.554 clat percentiles (usec): 00:35:49.554 | 1.00th=[ 351], 5.00th=[ 400], 10.00th=[ 461], 20.00th=[ 529], 00:35:49.554 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 660], 00:35:49.554 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 799], 00:35:49.554 | 99.00th=[ 857], 99.50th=[ 922], 99.90th=[ 963], 99.95th=[ 963], 00:35:49.554 | 99.99th=[ 963] 00:35:49.554 bw ( KiB/s): min= 4096, max= 4096, per=45.42%, avg=4096.00, stdev= 0.00, samples=1 00:35:49.554 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:49.554 lat (usec) : 250=0.19%, 500=15.31%, 750=69.75%, 1000=11.53% 00:35:49.554 lat (msec) : 2=0.19%, 50=3.02% 00:35:49.554 cpu : usr=1.59%, sys=1.39%, ctx=530, majf=0, minf=1 00:35:49.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.554 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:49.554 job3: (groupid=0, jobs=1): err= 0: pid=1224773: Thu Dec 5 13:40:11 2024 00:35:49.554 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:49.554 slat (nsec): min=7042, max=60430, avg=24933.08, stdev=4647.27 00:35:49.554 clat (usec): min=728, max=41733, avg=1153.67, stdev=1799.49 00:35:49.554 lat (usec): min=735, max=41759, avg=1178.60, stdev=1799.61 00:35:49.554 clat percentiles (usec): 00:35:49.554 | 1.00th=[ 791], 5.00th=[ 881], 10.00th=[ 947], 20.00th=[ 1004], 00:35:49.554 | 30.00th=[ 1045], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1106], 00:35:49.554 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1205], 00:35:49.554 | 99.00th=[ 1287], 99.50th=[ 1336], 99.90th=[41681], 99.95th=[41681], 00:35:49.554 | 99.99th=[41681] 00:35:49.554 write: IOPS=671, BW=2685KiB/s (2750kB/s)(2688KiB/1001msec); 0 zone resets 00:35:49.554 slat (nsec): min=9552, max=54465, avg=22282.25, stdev=11168.13 00:35:49.554 clat (usec): min=252, max=958, avg=556.04, stdev=121.25 00:35:49.554 lat (usec): min=262, max=974, avg=578.32, stdev=122.98 00:35:49.554 clat percentiles (usec): 00:35:49.554 | 1.00th=[ 343], 5.00th=[ 371], 10.00th=[ 396], 20.00th=[ 453], 00:35:49.554 | 30.00th=[ 486], 40.00th=[ 510], 50.00th=[ 545], 60.00th=[ 594], 00:35:49.554 | 70.00th=[ 619], 80.00th=[ 660], 90.00th=[ 717], 95.00th=[ 766], 00:35:49.554 | 99.00th=[ 873], 99.50th=[ 881], 99.90th=[ 963], 99.95th=[ 963], 00:35:49.554 | 99.99th=[ 963] 00:35:49.554 bw ( KiB/s): min= 4096, max= 4096, per=45.42%, avg=4096.00, stdev= 0.00, samples=1 00:35:49.554 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:49.554 lat (usec) : 500=19.93%, 750=33.11%, 1000=11.99% 00:35:49.554 lat (msec) : 2=34.88%, 50=0.08% 00:35:49.554 cpu : usr=1.50%, sys=2.90%, ctx=1184, majf=0, minf=1 00:35:49.554 
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:49.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:49.554 issued rwts: total=512,672,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:49.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:49.554 00:35:49.554 Run status group 0 (all jobs): 00:35:49.554 READ: bw=4109KiB/s (4207kB/s), 62.2KiB/s-2046KiB/s (63.7kB/s-2095kB/s), io=4228KiB (4329kB), run=1001-1029msec 00:35:49.554 WRITE: bw=9018KiB/s (9235kB/s), 1990KiB/s-2685KiB/s (2038kB/s-2750kB/s), io=9280KiB (9503kB), run=1001-1029msec 00:35:49.554 00:35:49.554 Disk stats (read/write): 00:35:49.554 nvme0n1: ios=36/512, merge=0/0, ticks=1426/327, in_queue=1753, util=96.59% 00:35:49.554 nvme0n2: ios=462/512, merge=0/0, ticks=516/307, in_queue=823, util=87.54% 00:35:49.554 nvme0n3: ios=37/512, merge=0/0, ticks=1370/257, in_queue=1627, util=96.61% 00:35:49.554 nvme0n4: ios=470/512, merge=0/0, ticks=712/278, in_queue=990, util=91.20% 00:35:49.555 13:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:49.555 [global] 00:35:49.555 thread=1 00:35:49.555 invalidate=1 00:35:49.555 rw=randwrite 00:35:49.555 time_based=1 00:35:49.555 runtime=1 00:35:49.555 ioengine=libaio 00:35:49.555 direct=1 00:35:49.555 bs=4096 00:35:49.555 iodepth=1 00:35:49.555 norandommap=0 00:35:49.555 numjobs=1 00:35:49.555 00:35:49.555 verify_dump=1 00:35:49.555 verify_backlog=512 00:35:49.555 verify_state_save=0 00:35:49.555 do_verify=1 00:35:49.555 verify=crc32c-intel 00:35:49.555 [job0] 00:35:49.555 filename=/dev/nvme0n1 00:35:49.555 [job1] 00:35:49.555 filename=/dev/nvme0n2 00:35:49.555 [job2] 00:35:49.555 filename=/dev/nvme0n3 00:35:49.555 [job3] 00:35:49.555 filename=/dev/nvme0n4 00:35:49.555 Could not set queue depth (nvme0n1) 00:35:49.555 Could not set queue depth (nvme0n2) 00:35:49.555 Could not set queue depth (nvme0n3) 00:35:49.555 Could not set queue depth (nvme0n4) 00:35:49.866 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:49.866 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:49.866 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:49.866 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:49.866 fio-3.35 00:35:49.866 Starting 4 threads 00:35:51.249 00:35:51.249 job0: (groupid=0, jobs=1): err= 0: pid=1225190: Thu Dec 5 13:40:13 2024 00:35:51.249 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:51.249 slat (nsec): min=6382, max=63408, avg=25495.30, stdev=6714.83 00:35:51.249 clat (usec): min=273, max=1173, avg=777.61, stdev=112.00 00:35:51.249 lat (usec): min=300, max=1199, avg=803.11, stdev=113.22 00:35:51.249 clat percentiles (usec): 00:35:51.249 | 1.00th=[ 529], 5.00th=[ 578], 10.00th=[ 627], 20.00th=[ 693], 00:35:51.249 | 30.00th=[ 725], 40.00th=[ 750], 50.00th=[ 783], 60.00th=[ 824], 00:35:51.249 | 70.00th=[ 848], 80.00th=[ 873], 90.00th=[ 906], 95.00th=[ 930], 00:35:51.249 | 99.00th=[ 1012], 99.50th=[ 1074], 99.90th=[ 1172], 99.95th=[ 1172], 00:35:51.249 | 99.99th=[ 1172] 00:35:51.249 write: IOPS=1011, BW=4048KiB/s 
(4145kB/s)(4052KiB/1001msec); 0 zone resets 00:35:51.249 slat (nsec): min=8627, max=76035, avg=29865.18, stdev=9478.48 00:35:51.249 clat (usec): min=148, max=872, avg=540.41, stdev=128.04 00:35:51.249 lat (usec): min=161, max=905, avg=570.27, stdev=131.99 00:35:51.249 clat percentiles (usec): 00:35:51.249 | 1.00th=[ 233], 5.00th=[ 322], 10.00th=[ 367], 20.00th=[ 437], 00:35:51.249 | 30.00th=[ 469], 40.00th=[ 506], 50.00th=[ 553], 60.00th=[ 578], 00:35:51.249 | 70.00th=[ 611], 80.00th=[ 660], 90.00th=[ 701], 95.00th=[ 734], 00:35:51.249 | 99.00th=[ 799], 99.50th=[ 816], 99.90th=[ 865], 99.95th=[ 873], 00:35:51.249 | 99.99th=[ 873] 00:35:51.249 bw ( KiB/s): min= 4096, max= 4096, per=41.14%, avg=4096.00, stdev= 0.00, samples=1 00:35:51.249 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:51.249 lat (usec) : 250=1.05%, 500=24.79%, 750=51.48%, 1000=22.30% 00:35:51.249 lat (msec) : 2=0.39% 00:35:51.249 cpu : usr=3.00%, sys=5.90%, ctx=1527, majf=0, minf=1 00:35:51.249 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.250 issued rwts: total=512,1013,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.250 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:51.250 job1: (groupid=0, jobs=1): err= 0: pid=1225196: Thu Dec 5 13:40:13 2024 00:35:51.250 read: IOPS=372, BW=1488KiB/s (1524kB/s)(1524KiB/1024msec) 00:35:51.250 slat (nsec): min=6810, max=60336, avg=24464.77, stdev=6476.93 00:35:51.250 clat (usec): min=259, max=42101, avg=1730.22, stdev=5818.65 00:35:51.250 lat (usec): min=278, max=42127, avg=1754.68, stdev=5818.95 00:35:51.250 clat percentiles (usec): 00:35:51.250 | 1.00th=[ 343], 5.00th=[ 545], 10.00th=[ 619], 20.00th=[ 701], 00:35:51.250 | 30.00th=[ 742], 40.00th=[ 791], 50.00th=[ 832], 60.00th=[ 865], 00:35:51.250 | 70.00th=[ 1029], 80.00th=[ 1188], 90.00th=[ 1254], 95.00th=[ 1319], 00:35:51.250 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:35:51.250 | 99.99th=[42206] 00:35:51.250 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:35:51.250 slat (nsec): min=9605, max=53386, avg=29956.37, stdev=8187.39 00:35:51.250 clat (usec): min=225, max=1134, avg=643.70, stdev=124.79 00:35:51.250 lat (usec): min=235, max=1167, avg=673.65, stdev=128.00 00:35:51.250 clat percentiles (usec): 00:35:51.250 | 1.00th=[ 363], 5.00th=[ 433], 10.00th=[ 478], 20.00th=[ 529], 00:35:51.250 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685], 00:35:51.250 | 70.00th=[ 717], 80.00th=[ 750], 90.00th=[ 791], 95.00th=[ 824], 00:35:51.250 | 99.00th=[ 938], 99.50th=[ 955], 99.90th=[ 1139], 99.95th=[ 1139], 00:35:51.250 | 99.99th=[ 1139] 00:35:51.250 bw ( KiB/s): min= 4096, max= 4096, per=41.14%, avg=4096.00, stdev= 0.00, samples=1 00:35:51.250 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:51.250 lat (usec) : 250=0.11%, 500=9.63%, 750=49.27%, 1000=28.00% 00:35:51.250 lat (msec) : 2=12.09%, 50=0.90% 00:35:51.250 cpu : usr=1.47%, sys=2.35%, ctx=895, majf=0, minf=1 00:35:51.250 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.250 issued rwts: total=381,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.250 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:35:51.250 job2: (groupid=0, jobs=1): err= 0: pid=1225223: Thu Dec 5 13:40:13 2024 00:35:51.250 read: IOPS=17, BW=70.6KiB/s (72.3kB/s)(72.0KiB/1020msec) 00:35:51.250 slat (nsec): min=24924, max=25655, avg=25173.67, stdev=191.54 00:35:51.250 clat (usec): min=1017, max=42102, avg=39568.27, stdev=9625.96 00:35:51.250 lat (usec): min=1042, max=42127, avg=39593.45, stdev=9625.89 00:35:51.250 clat percentiles (usec): 00:35:51.250 | 1.00th=[ 1020], 5.00th=[ 1020], 10.00th=[41157], 20.00th=[41681], 00:35:51.250 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:35:51.250 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:51.250 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:51.250 | 99.99th=[42206] 00:35:51.250 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:35:51.250 slat (nsec): min=9166, max=51529, avg=27850.05, stdev=8983.15 00:35:51.250 clat (usec): min=170, max=890, avg=565.18, stdev=133.95 00:35:51.250 lat (usec): min=182, max=922, avg=593.03, stdev=138.19 00:35:51.250 clat percentiles (usec): 00:35:51.250 | 1.00th=[ 265], 5.00th=[ 306], 10.00th=[ 379], 20.00th=[ 453], 00:35:51.250 | 30.00th=[ 515], 40.00th=[ 545], 50.00th=[ 570], 60.00th=[ 603], 00:35:51.250 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 734], 95.00th=[ 783], 00:35:51.250 | 99.00th=[ 840], 99.50th=[ 873], 99.90th=[ 889], 99.95th=[ 889], 00:35:51.250 | 99.99th=[ 889] 00:35:51.250 bw ( KiB/s): min= 4096, max= 4096, per=41.14%, avg=4096.00, stdev= 0.00, samples=1 00:35:51.250 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:51.250 lat (usec) : 250=0.57%, 500=25.85%, 750=63.40%, 1000=6.79% 00:35:51.250 lat (msec) : 2=0.19%, 50=3.21% 00:35:51.250 cpu : usr=0.79%, sys=1.37%, ctx=531, majf=0, minf=1 00:35:51.250 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.250 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.250 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:51.250 job3: (groupid=0, jobs=1): err= 0: pid=1225235: Thu Dec 5 13:40:13 2024 00:35:51.250 read: IOPS=17, BW=70.5KiB/s (72.2kB/s)(72.0KiB/1021msec) 00:35:51.250 slat (nsec): min=27223, max=27687, avg=27514.61, stdev=134.91 00:35:51.250 clat (usec): min=40785, max=41927, avg=41239.75, stdev=419.83 00:35:51.250 lat (usec): min=40813, max=41954, avg=41267.26, stdev=419.82 00:35:51.250 clat percentiles (usec): 00:35:51.250 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:35:51.250 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:51.250 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:35:51.250 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:35:51.250 | 99.99th=[41681] 00:35:51.250 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:35:51.250 slat (nsec): min=8465, max=54172, avg=30259.41, stdev=9458.06 00:35:51.250 clat (usec): min=181, max=861, avg=505.34, stdev=118.88 00:35:51.250 lat (usec): min=215, max=894, avg=535.60, stdev=122.55 00:35:51.250 clat percentiles (usec): 00:35:51.250 | 1.00th=[ 273], 5.00th=[ 310], 10.00th=[ 338], 20.00th=[ 383], 00:35:51.250 | 30.00th=[ 437], 40.00th=[ 486], 50.00th=[ 523], 60.00th=[ 545], 00:35:51.250 | 
70.00th=[ 578], 80.00th=[ 611], 90.00th=[ 652], 95.00th=[ 685], 00:35:51.250 | 99.00th=[ 766], 99.50th=[ 791], 99.90th=[ 865], 99.95th=[ 865], 00:35:51.250 | 99.99th=[ 865] 00:35:51.250 bw ( KiB/s): min= 4096, max= 4096, per=41.14%, avg=4096.00, stdev= 0.00, samples=1 00:35:51.250 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:51.250 lat (usec) : 250=0.57%, 500=42.64%, 750=51.89%, 1000=1.51% 00:35:51.250 lat (msec) : 50=3.40% 00:35:51.250 cpu : usr=0.88%, sys=2.16%, ctx=530, majf=0, minf=1 00:35:51.250 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:51.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:51.250 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:51.250 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:51.250 00:35:51.250 Run status group 0 (all jobs): 00:35:51.250 READ: bw=3629KiB/s (3716kB/s), 70.5KiB/s-2046KiB/s (72.2kB/s-2095kB/s), io=3716KiB (3805kB), run=1001-1024msec 00:35:51.250 WRITE: bw=9957KiB/s (10.2MB/s), 2000KiB/s-4048KiB/s (2048kB/s-4145kB/s), io=9.96MiB (10.4MB), run=1001-1024msec 00:35:51.250 00:35:51.250 Disk stats (read/write): 00:35:51.250 nvme0n1: ios=562/639, merge=0/0, ticks=597/276, in_queue=873, util=85.77% 00:35:51.250 nvme0n2: ios=374/512, merge=0/0, ticks=1370/321, in_queue=1691, util=97.92% 00:35:51.250 nvme0n3: ios=12/512, merge=0/0, ticks=463/282, in_queue=745, util=86.68% 00:35:51.250 nvme0n4: ios=12/512, merge=0/0, ticks=494/215, in_queue=709, util=88.85% 00:35:51.250 13:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:35:51.250 [global] 00:35:51.250 thread=1 00:35:51.250 invalidate=1 00:35:51.250 rw=write 00:35:51.250 time_based=1 00:35:51.250 runtime=1 00:35:51.250 ioengine=libaio 00:35:51.250 direct=1 00:35:51.250 bs=4096 00:35:51.250 iodepth=128 00:35:51.250 norandommap=0 00:35:51.250 numjobs=1 00:35:51.250 00:35:51.250 verify_dump=1 00:35:51.250 verify_backlog=512 00:35:51.250 verify_state_save=0 00:35:51.250 do_verify=1 00:35:51.250 verify=crc32c-intel 00:35:51.250 [job0] 00:35:51.250 filename=/dev/nvme0n1 00:35:51.250 [job1] 00:35:51.250 filename=/dev/nvme0n2 00:35:51.250 [job2] 00:35:51.250 filename=/dev/nvme0n3 00:35:51.250 [job3] 00:35:51.250 filename=/dev/nvme0n4 00:35:51.250 Could not set queue depth (nvme0n1) 00:35:51.250 Could not set queue depth (nvme0n2) 00:35:51.250 Could not set queue depth (nvme0n3) 00:35:51.250 Could not set queue depth (nvme0n4) 00:35:51.511 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:51.511 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:51.511 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:51.511 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:51.511 fio-3.35 00:35:51.511 Starting 4 threads 00:35:52.890 00:35:52.890 job0: (groupid=0, jobs=1): err= 0: pid=1225690: Thu Dec 5 13:40:15 2024 00:35:52.890 read: IOPS=8728, BW=34.1MiB/s (35.8MB/s)(34.3MiB/1006msec) 00:35:52.890 slat (nsec): min=939, max=6868.0k, avg=53881.81, stdev=424894.08 00:35:52.890 clat (usec): min=3048, max=18138, avg=7256.35, 
stdev=1873.99 00:35:52.890 lat (usec): min=3053, max=18148, avg=7310.23, stdev=1906.73 00:35:52.890 clat percentiles (usec): 00:35:52.890 | 1.00th=[ 4293], 5.00th=[ 5342], 10.00th=[ 5604], 20.00th=[ 5866], 00:35:52.890 | 30.00th=[ 6128], 40.00th=[ 6390], 50.00th=[ 6652], 60.00th=[ 7046], 00:35:52.890 | 70.00th=[ 7570], 80.00th=[ 8455], 90.00th=[10290], 95.00th=[11076], 00:35:52.890 | 99.00th=[12780], 99.50th=[13566], 99.90th=[18220], 99.95th=[18220], 00:35:52.890 | 99.99th=[18220] 00:35:52.890 write: IOPS=9161, BW=35.8MiB/s (37.5MB/s)(36.0MiB/1006msec); 0 zone resets 00:35:52.890 slat (nsec): min=1632, max=20377k, avg=52477.42, stdev=452007.61 00:35:52.890 clat (usec): min=1997, max=33038, avg=6925.44, stdev=2825.61 00:35:52.890 lat (usec): min=2005, max=33072, avg=6977.91, stdev=2850.16 00:35:52.891 clat percentiles (usec): 00:35:52.891 | 1.00th=[ 3261], 5.00th=[ 3916], 10.00th=[ 4178], 20.00th=[ 4817], 00:35:52.891 | 30.00th=[ 5669], 40.00th=[ 5997], 50.00th=[ 6456], 60.00th=[ 6849], 00:35:52.891 | 70.00th=[ 7373], 80.00th=[ 8160], 90.00th=[ 9896], 95.00th=[12125], 00:35:52.891 | 99.00th=[21627], 99.50th=[21627], 99.90th=[21890], 99.95th=[21890], 00:35:52.891 | 99.99th=[33162] 00:35:52.891 bw ( KiB/s): min=36472, max=36856, per=36.37%, avg=36664.00, stdev=271.53, samples=2 00:35:52.891 iops : min= 9118, max= 9214, avg=9166.00, stdev=67.88, samples=2 00:35:52.891 lat (msec) : 2=0.01%, 4=3.47%, 10=85.90%, 20=9.92%, 50=0.71% 00:35:52.891 cpu : usr=5.57%, sys=9.95%, ctx=468, majf=0, minf=1 00:35:52.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:52.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:52.891 issued rwts: total=8781,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:52.891 job1: (groupid=0, jobs=1): err= 0: pid=1225695: Thu Dec 5 13:40:15 2024 00:35:52.891 read: IOPS=5224, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1006msec) 00:35:52.891 slat (nsec): min=893, max=18563k, avg=95523.28, stdev=874075.33 00:35:52.891 clat (usec): min=1900, max=70699, avg=13147.27, stdev=9340.67 00:35:52.891 lat (usec): min=3266, max=70705, avg=13242.80, stdev=9413.68 00:35:52.891 clat percentiles (usec): 00:35:52.891 | 1.00th=[ 3916], 5.00th=[ 5866], 10.00th=[ 6521], 20.00th=[ 7373], 00:35:52.891 | 30.00th=[ 8455], 40.00th=[ 9765], 50.00th=[11338], 60.00th=[12649], 00:35:52.891 | 70.00th=[13698], 80.00th=[15139], 90.00th=[19268], 95.00th=[26346], 00:35:52.891 | 99.00th=[55313], 99.50th=[61604], 99.90th=[70779], 99.95th=[70779], 00:35:52.891 | 99.99th=[70779] 00:35:52.891 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:35:52.891 slat (nsec): min=1539, max=12027k, avg=76636.43, stdev=617409.10 00:35:52.891 clat (usec): min=581, max=52990, avg=10391.16, stdev=7146.33 00:35:52.891 lat (usec): min=727, max=53001, avg=10467.79, stdev=7200.65 00:35:52.891 clat percentiles (usec): 00:35:52.891 | 1.00th=[ 2040], 5.00th=[ 4015], 10.00th=[ 5014], 20.00th=[ 6194], 00:35:52.891 | 30.00th=[ 6783], 40.00th=[ 7373], 50.00th=[ 8979], 60.00th=[10683], 00:35:52.891 | 70.00th=[11207], 80.00th=[12649], 90.00th=[14746], 95.00th=[19530], 00:35:52.891 | 99.00th=[45876], 99.50th=[49546], 99.90th=[53216], 99.95th=[53216], 00:35:52.891 | 99.99th=[53216] 00:35:52.891 bw ( KiB/s): min=20480, max=24576, per=22.35%, avg=22528.00, stdev=2896.31, samples=2 00:35:52.891 iops : min= 5120, max= 6144, 
avg=5632.00, stdev=724.08, samples=2 00:35:52.891 lat (usec) : 750=0.02%, 1000=0.14% 00:35:52.891 lat (msec) : 2=0.28%, 4=2.60%, 10=45.49%, 20=45.27%, 50=5.03% 00:35:52.891 lat (msec) : 100=1.17% 00:35:52.891 cpu : usr=3.38%, sys=5.77%, ctx=335, majf=0, minf=1 00:35:52.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:35:52.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:52.891 issued rwts: total=5256,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:52.891 job2: (groupid=0, jobs=1): err= 0: pid=1225704: Thu Dec 5 13:40:15 2024 00:35:52.891 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:35:52.891 slat (nsec): min=910, max=20581k, avg=100388.19, stdev=708504.05 00:35:52.891 clat (usec): min=4153, max=65092, avg=13102.44, stdev=9180.25 00:35:52.891 lat (usec): min=4161, max=65097, avg=13202.82, stdev=9237.64 00:35:52.891 clat percentiles (usec): 00:35:52.891 | 1.00th=[ 6194], 5.00th=[ 8094], 10.00th=[ 8455], 20.00th=[ 9896], 00:35:52.891 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10683], 60.00th=[11076], 00:35:52.891 | 70.00th=[11207], 80.00th=[11600], 90.00th=[14615], 95.00th=[42730], 00:35:52.891 | 99.00th=[49546], 99.50th=[52167], 99.90th=[61604], 99.95th=[62653], 00:35:52.891 | 99.99th=[65274] 00:35:52.891 write: IOPS=5360, BW=20.9MiB/s (22.0MB/s)(21.0MiB/1005msec); 0 zone resets 00:35:52.891 slat (nsec): min=1554, max=20039k, avg=87114.02, stdev=633265.60 00:35:52.891 clat (usec): min=1231, max=59251, avg=11219.76, stdev=6288.14 00:35:52.891 lat (usec): min=1244, max=62589, avg=11306.87, stdev=6331.35 00:35:52.891 clat percentiles (usec): 00:35:52.891 | 1.00th=[ 5538], 5.00th=[ 7701], 10.00th=[ 7898], 20.00th=[ 8225], 00:35:52.891 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[10028], 00:35:52.891 | 70.00th=[10683], 80.00th=[11338], 90.00th=[15664], 95.00th=[20055], 00:35:52.891 | 99.00th=[45876], 99.50th=[46400], 99.90th=[46400], 99.95th=[46924], 00:35:52.891 | 99.99th=[59507] 00:35:52.891 bw ( KiB/s): min=17496, max=24576, per=20.87%, avg=21036.00, stdev=5006.32, samples=2 00:35:52.891 iops : min= 4374, max= 6144, avg=5259.00, stdev=1251.58, samples=2 00:35:52.891 lat (msec) : 2=0.02%, 4=0.10%, 10=40.87%, 20=51.99%, 50=6.66% 00:35:52.891 lat (msec) : 100=0.36% 00:35:52.891 cpu : usr=2.89%, sys=2.69%, ctx=483, majf=0, minf=2 00:35:52.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:35:52.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:52.891 issued rwts: total=5120,5387,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:52.891 job3: (groupid=0, jobs=1): err= 0: pid=1225708: Thu Dec 5 13:40:15 2024 00:35:52.891 read: IOPS=4938, BW=19.3MiB/s (20.2MB/s)(19.4MiB/1006msec) 00:35:52.891 slat (nsec): min=930, max=14953k, avg=97983.70, stdev=888630.62 00:35:52.891 clat (usec): min=1602, max=33726, avg=14115.81, stdev=4167.22 00:35:52.891 lat (usec): min=1609, max=33773, avg=14213.79, stdev=4245.89 00:35:52.891 clat percentiles (usec): 00:35:52.891 | 1.00th=[ 5538], 5.00th=[ 8225], 10.00th=[ 9241], 20.00th=[10945], 00:35:52.891 | 30.00th=[11994], 40.00th=[12780], 50.00th=[13566], 60.00th=[14615], 00:35:52.891 | 70.00th=[15270], 
80.00th=[17171], 90.00th=[20317], 95.00th=[21365], 00:35:52.891 | 99.00th=[25822], 99.50th=[27657], 99.90th=[31327], 99.95th=[33424], 00:35:52.891 | 99.99th=[33817] 00:35:52.891 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:35:52.891 slat (nsec): min=1643, max=11728k, avg=80406.57, stdev=698868.49 00:35:52.891 clat (usec): min=507, max=32029, avg=11236.48, stdev=5098.46 00:35:52.891 lat (usec): min=541, max=32033, avg=11316.89, stdev=5135.57 00:35:52.891 clat percentiles (usec): 00:35:52.891 | 1.00th=[ 1483], 5.00th=[ 3982], 10.00th=[ 5145], 20.00th=[ 7767], 00:35:52.891 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10290], 60.00th=[11469], 00:35:52.891 | 70.00th=[13042], 80.00th=[14484], 90.00th=[17957], 95.00th=[21103], 00:35:52.891 | 99.00th=[26870], 99.50th=[28705], 99.90th=[30802], 99.95th=[32113], 00:35:52.891 | 99.99th=[32113] 00:35:52.891 bw ( KiB/s): min=20480, max=20480, per=20.31%, avg=20480.00, stdev= 0.00, samples=2 00:35:52.891 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:35:52.891 lat (usec) : 750=0.07%, 1000=0.03% 00:35:52.891 lat (msec) : 2=1.36%, 4=1.36%, 10=27.88%, 20=60.79%, 50=8.52% 00:35:52.891 cpu : usr=4.48%, sys=4.98%, ctx=281, majf=0, minf=1 00:35:52.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:35:52.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:52.891 issued rwts: total=4968,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:52.891 00:35:52.891 Run status group 0 (all jobs): 00:35:52.891 READ: bw=93.7MiB/s (98.2MB/s), 19.3MiB/s-34.1MiB/s (20.2MB/s-35.8MB/s), io=94.2MiB (98.8MB), run=1005-1006msec 00:35:52.891 WRITE: bw=98.5MiB/s (103MB/s), 19.9MiB/s-35.8MiB/s (20.8MB/s-37.5MB/s), io=99.0MiB (104MB), run=1005-1006msec 00:35:52.891 00:35:52.891 Disk stats (read/write): 00:35:52.891 nvme0n1: ios=7461/7680, merge=0/0, ticks=50277/49110, in_queue=99387, util=87.47% 00:35:52.891 nvme0n2: ios=4752/5120, merge=0/0, ticks=49541/48913, in_queue=98454, util=86.33% 00:35:52.891 nvme0n3: ios=4463/4608, merge=0/0, ticks=16715/16842, in_queue=33557, util=87.84% 00:35:52.891 nvme0n4: ios=4126/4184, merge=0/0, ticks=52645/43242, in_queue=95887, util=91.86% 00:35:52.891 13:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:35:52.891 [global] 00:35:52.891 thread=1 00:35:52.891 invalidate=1 00:35:52.891 rw=randwrite 00:35:52.891 time_based=1 00:35:52.891 runtime=1 00:35:52.891 ioengine=libaio 00:35:52.891 direct=1 00:35:52.891 bs=4096 00:35:52.891 iodepth=128 00:35:52.891 norandommap=0 00:35:52.891 numjobs=1 00:35:52.891 00:35:52.891 verify_dump=1 00:35:52.891 verify_backlog=512 00:35:52.891 verify_state_save=0 00:35:52.891 do_verify=1 00:35:52.891 verify=crc32c-intel 00:35:52.891 [job0] 00:35:52.891 filename=/dev/nvme0n1 00:35:52.891 [job1] 00:35:52.891 filename=/dev/nvme0n2 00:35:52.891 [job2] 00:35:52.891 filename=/dev/nvme0n3 00:35:52.891 [job3] 00:35:52.891 filename=/dev/nvme0n4 00:35:52.891 Could not set queue depth (nvme0n1) 00:35:52.891 Could not set queue depth (nvme0n2) 00:35:52.891 Could not set queue depth (nvme0n3) 00:35:52.891 Could not set queue depth (nvme0n4) 00:35:53.150 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:35:53.150 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:53.150 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:53.150 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:35:53.150 fio-3.35 00:35:53.150 Starting 4 threads 00:35:54.534 00:35:54.534 job0: (groupid=0, jobs=1): err= 0: pid=1226214: Thu Dec 5 13:40:16 2024 00:35:54.534 read: IOPS=8118, BW=31.7MiB/s (33.3MB/s)(32.0MiB/1009msec) 00:35:54.534 slat (nsec): min=938, max=42866k, avg=61947.59, stdev=707574.77 00:35:54.534 clat (usec): min=2397, max=44041, avg=7722.47, stdev=3440.43 00:35:54.534 lat (usec): min=2400, max=82641, avg=7784.42, stdev=3579.36 00:35:54.534 clat percentiles (usec): 00:35:54.534 | 1.00th=[ 4047], 5.00th=[ 4817], 10.00th=[ 5145], 20.00th=[ 5669], 00:35:54.534 | 30.00th=[ 5997], 40.00th=[ 6521], 50.00th=[ 7111], 60.00th=[ 7767], 00:35:54.534 | 70.00th=[ 8356], 80.00th=[ 9372], 90.00th=[10552], 95.00th=[12125], 00:35:54.534 | 99.00th=[14484], 99.50th=[39584], 99.90th=[43779], 99.95th=[43779], 00:35:54.534 | 99.99th=[44303] 00:35:54.534 write: IOPS=8179, BW=32.0MiB/s (33.5MB/s)(32.2MiB/1009msec); 0 zone resets 00:35:54.534 slat (nsec): min=1564, max=22715k, avg=51651.43, stdev=488918.82 00:35:54.534 clat (usec): min=1175, max=105300, avg=7841.38, stdev=10816.68 00:35:54.534 lat (usec): min=1185, max=105307, avg=7893.03, stdev=10857.69 00:35:54.534 clat percentiles (msec): 00:35:54.534 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 6], 00:35:54.534 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 7], 60.00th=[ 7], 00:35:54.534 | 70.00th=[ 7], 80.00th=[ 8], 90.00th=[ 10], 95.00th=[ 11], 00:35:54.534 | 99.00th=[ 81], 99.50th=[ 94], 99.90th=[ 106], 99.95th=[ 106], 00:35:54.534 | 99.99th=[ 106] 00:35:54.534 bw ( KiB/s): min=24576, max=40960, per=41.35%, avg=32768.00, stdev=11585.24, samples=2 00:35:54.534 iops : min= 6144, max=10240, avg=8192.00, stdev=2896.31, samples=2 00:35:54.534 lat (msec) : 2=0.29%, 4=4.43%, 10=84.69%, 20=8.94%, 50=0.67% 00:35:54.534 lat (msec) : 100=0.78%, 250=0.19% 00:35:54.534 cpu : usr=4.96%, sys=7.64%, ctx=458, majf=0, minf=1 00:35:54.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:35:54.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:54.534 issued rwts: total=8192,8253,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:54.534 job1: (groupid=0, jobs=1): err= 0: pid=1226216: Thu Dec 5 13:40:16 2024 00:35:54.534 read: IOPS=3234, BW=12.6MiB/s (13.2MB/s)(12.8MiB/1011msec) 00:35:54.534 slat (nsec): min=1032, max=23909k, avg=144926.01, stdev=1262772.19 00:35:54.534 clat (usec): min=2887, max=49119, avg=20066.09, stdev=7623.16 00:35:54.534 lat (usec): min=8157, max=49217, avg=20211.02, stdev=7734.79 00:35:54.534 clat percentiles (usec): 00:35:54.534 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[13304], 00:35:54.534 | 30.00th=[16450], 40.00th=[16909], 50.00th=[18744], 60.00th=[21365], 00:35:54.534 | 70.00th=[23462], 80.00th=[26870], 90.00th=[30278], 95.00th=[34866], 00:35:54.534 | 99.00th=[41157], 99.50th=[41157], 99.90th=[44827], 99.95th=[46924], 00:35:54.534 | 99.99th=[49021] 00:35:54.534 write: IOPS=3545, BW=13.8MiB/s 
(14.5MB/s)(14.0MiB/1011msec); 0 zone resets 00:35:54.534 slat (nsec): min=1702, max=23006k, avg=142528.45, stdev=1134905.43 00:35:54.534 clat (usec): min=2761, max=89014, avg=17369.38, stdev=12196.91 00:35:54.534 lat (usec): min=2770, max=89023, avg=17511.91, stdev=12280.61 00:35:54.534 clat percentiles (usec): 00:35:54.534 | 1.00th=[ 7832], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[10814], 00:35:54.534 | 30.00th=[11863], 40.00th=[13698], 50.00th=[15008], 60.00th=[16057], 00:35:54.534 | 70.00th=[17171], 80.00th=[19006], 90.00th=[24249], 95.00th=[34341], 00:35:54.534 | 99.00th=[81265], 99.50th=[86508], 99.90th=[88605], 99.95th=[88605], 00:35:54.534 | 99.99th=[88605] 00:35:54.534 bw ( KiB/s): min=12288, max=16384, per=18.09%, avg=14336.00, stdev=2896.31, samples=2 00:35:54.534 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:35:54.534 lat (msec) : 4=0.10%, 10=10.48%, 20=60.40%, 50=27.17%, 100=1.85% 00:35:54.534 cpu : usr=2.97%, sys=3.76%, ctx=185, majf=0, minf=1 00:35:54.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:35:54.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:54.534 issued rwts: total=3270,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:54.534 job2: (groupid=0, jobs=1): err= 0: pid=1226217: Thu Dec 5 13:40:16 2024 00:35:54.534 read: IOPS=4544, BW=17.8MiB/s (18.6MB/s)(17.8MiB/1003msec) 00:35:54.534 slat (nsec): min=958, max=12593k, avg=85610.78, stdev=722191.54 00:35:54.534 clat (usec): min=1474, max=43463, avg=12890.58, stdev=8784.57 00:35:54.534 lat (usec): min=1481, max=43469, avg=12976.19, stdev=8851.75 00:35:54.534 clat percentiles (usec): 00:35:54.534 | 1.00th=[ 2474], 5.00th=[ 3490], 10.00th=[ 4113], 20.00th=[ 6063], 00:35:54.534 | 30.00th=[ 6587], 40.00th=[ 6915], 50.00th=[ 8094], 60.00th=[13566], 00:35:54.534 | 70.00th=[18220], 80.00th=[22152], 90.00th=[26608], 95.00th=[28443], 00:35:54.534 | 99.00th=[33162], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:35:54.534 | 99.99th=[43254] 00:35:54.534 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:35:54.534 slat (nsec): min=1579, max=36212k, avg=103256.90, stdev=972707.93 00:35:54.534 clat (usec): min=640, max=71653, avg=14880.33, stdev=14105.02 00:35:54.534 lat (usec): min=671, max=71662, avg=14983.59, stdev=14209.64 00:35:54.534 clat percentiles (usec): 00:35:54.534 | 1.00th=[ 1713], 5.00th=[ 2966], 10.00th=[ 3818], 20.00th=[ 5669], 00:35:54.534 | 30.00th=[ 6194], 40.00th=[ 6783], 50.00th=[ 7767], 60.00th=[11994], 00:35:54.534 | 70.00th=[18220], 80.00th=[21627], 90.00th=[39584], 95.00th=[44303], 00:35:54.534 | 99.00th=[68682], 99.50th=[70779], 99.90th=[70779], 99.95th=[71828], 00:35:54.534 | 99.99th=[71828] 00:35:54.534 bw ( KiB/s): min=17976, max=18888, per=23.26%, avg=18432.00, stdev=644.88, samples=2 00:35:54.534 iops : min= 4494, max= 4722, avg=4608.00, stdev=161.22, samples=2 00:35:54.534 lat (usec) : 750=0.01%, 1000=0.03% 00:35:54.534 lat (msec) : 2=1.01%, 4=8.62%, 10=46.50%, 20=19.04%, 50=23.85% 00:35:54.534 lat (msec) : 100=0.94% 00:35:54.534 cpu : usr=3.79%, sys=5.39%, ctx=382, majf=0, minf=1 00:35:54.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:35:54.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:35:54.534 issued rwts: total=4558,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:54.534 job3: (groupid=0, jobs=1): err= 0: pid=1226218: Thu Dec 5 13:40:16 2024 00:35:54.534 read: IOPS=3475, BW=13.6MiB/s (14.2MB/s)(13.7MiB/1011msec) 00:35:54.534 slat (nsec): min=1066, max=20741k, avg=163981.65, stdev=1275339.81 00:35:54.534 clat (usec): min=8461, max=61733, avg=21769.27, stdev=7769.84 00:35:54.534 lat (usec): min=8466, max=61742, avg=21933.25, stdev=7871.87 00:35:54.534 clat percentiles (usec): 00:35:54.534 | 1.00th=[10552], 5.00th=[11863], 10.00th=[12125], 20.00th=[16450], 00:35:54.534 | 30.00th=[17957], 40.00th=[19792], 50.00th=[20579], 60.00th=[21890], 00:35:54.534 | 70.00th=[23987], 80.00th=[26608], 90.00th=[30540], 95.00th=[35390], 00:35:54.534 | 99.00th=[52167], 99.50th=[59507], 99.90th=[61604], 99.95th=[61604], 00:35:54.534 | 99.99th=[61604] 00:35:54.534 write: IOPS=3545, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1011msec); 0 zone resets 00:35:54.534 slat (nsec): min=1665, max=14961k, avg=112873.12, stdev=927245.88 00:35:54.534 clat (usec): min=1124, max=61695, avg=14441.37, stdev=5235.44 00:35:54.534 lat (usec): min=1134, max=61697, avg=14554.24, stdev=5301.18 00:35:54.534 clat percentiles (usec): 00:35:54.534 | 1.00th=[ 7177], 5.00th=[ 7767], 10.00th=[ 9110], 20.00th=[10683], 00:35:54.534 | 30.00th=[11338], 40.00th=[12518], 50.00th=[13566], 60.00th=[15008], 00:35:54.534 | 70.00th=[15926], 80.00th=[17171], 90.00th=[19006], 95.00th=[21890], 00:35:54.534 | 99.00th=[35914], 99.50th=[41157], 99.90th=[43779], 99.95th=[61604], 00:35:54.534 | 99.99th=[61604] 00:35:54.534 bw ( KiB/s): min=14056, max=14616, per=18.09%, avg=14336.00, stdev=395.98, samples=2 00:35:54.534 iops : min= 3514, max= 3654, avg=3584.00, stdev=98.99, samples=2 00:35:54.534 lat (msec) : 2=0.03%, 10=9.07%, 20=58.13%, 50=32.11%, 100=0.66% 00:35:54.534 cpu : usr=2.97%, sys=3.66%, ctx=194, majf=0, minf=2 00:35:54.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:35:54.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:54.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:35:54.534 issued rwts: total=3514,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:54.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:35:54.534 00:35:54.534 Run status group 0 (all jobs): 00:35:54.534 READ: bw=75.5MiB/s (79.1MB/s), 12.6MiB/s-31.7MiB/s (13.2MB/s-33.3MB/s), io=76.3MiB (80.0MB), run=1003-1011msec 00:35:54.535 WRITE: bw=77.4MiB/s (81.1MB/s), 13.8MiB/s-32.0MiB/s (14.5MB/s-33.5MB/s), io=78.2MiB (82.0MB), run=1003-1011msec 00:35:54.535 00:35:54.535 Disk stats (read/write): 00:35:54.535 nvme0n1: ios=7720/7874, merge=0/0, ticks=55016/46326, in_queue=101342, util=89.28% 00:35:54.535 nvme0n2: ios=2615/2924, merge=0/0, ticks=50224/52489, in_queue=102713, util=92.97% 00:35:54.535 nvme0n3: ios=3128/3174, merge=0/0, ticks=28926/34427, in_queue=63353, util=91.77% 00:35:54.535 nvme0n4: ios=2935/3072, merge=0/0, ticks=60003/43066, in_queue=103069, util=95.62% 00:35:54.535 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:35:54.535 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1226546 00:35:54.535 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:35:54.535 13:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:35:54.535 [global] 00:35:54.535 thread=1 00:35:54.535 invalidate=1 00:35:54.535 rw=read 00:35:54.535 time_based=1 00:35:54.535 runtime=10 00:35:54.535 ioengine=libaio 00:35:54.535 direct=1 00:35:54.535 bs=4096 00:35:54.535 iodepth=1 00:35:54.535 norandommap=1 00:35:54.535 numjobs=1 00:35:54.535 00:35:54.535 [job0] 00:35:54.535 filename=/dev/nvme0n1 00:35:54.535 [job1] 00:35:54.535 filename=/dev/nvme0n2 00:35:54.535 [job2] 00:35:54.535 filename=/dev/nvme0n3 00:35:54.535 [job3] 00:35:54.535 filename=/dev/nvme0n4 00:35:54.535 Could not set queue depth (nvme0n1) 00:35:54.535 Could not set queue depth (nvme0n2) 00:35:54.535 Could not set queue depth (nvme0n3) 00:35:54.535 Could not set queue depth (nvme0n4) 00:35:55.104 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:55.104 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:55.104 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:55.104 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:55.104 fio-3.35 00:35:55.104 Starting 4 threads 00:35:57.651 13:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:35:57.651 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=9609216, buflen=4096 00:35:57.651 fio: pid=1226744, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:57.651 13:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:35:57.913 13:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:57.913 13:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:35:57.913 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=274432, buflen=4096 00:35:57.913 fio: pid=1226743, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:57.913 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10682368, buflen=4096 00:35:57.913 fio: pid=1226741, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:58.175 13:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:58.175 13:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:35:58.175 13:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:58.175 13:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:35:58.175 fio: io_u error on file /dev/nvme0n2: 
Operation not supported: read offset=2617344, buflen=4096 00:35:58.175 fio: pid=1226742, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:35:58.435 00:35:58.435 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1226741: Thu Dec 5 13:40:20 2024 00:35:58.435 read: IOPS=884, BW=3536KiB/s (3621kB/s)(10.2MiB/2950msec) 00:35:58.435 slat (usec): min=6, max=24087, avg=49.98, stdev=667.23 00:35:58.435 clat (usec): min=378, max=1778, avg=1064.40, stdev=115.70 00:35:58.435 lat (usec): min=388, max=24971, avg=1114.38, stdev=672.00 00:35:58.435 clat percentiles (usec): 00:35:58.435 | 1.00th=[ 668], 5.00th=[ 840], 10.00th=[ 922], 20.00th=[ 996], 00:35:58.435 | 30.00th=[ 1037], 40.00th=[ 1057], 50.00th=[ 1090], 60.00th=[ 1106], 00:35:58.435 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1205], 00:35:58.435 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[ 1336], 99.95th=[ 1352], 00:35:58.435 | 99.99th=[ 1778] 00:35:58.435 bw ( KiB/s): min= 3528, max= 3640, per=50.10%, avg=3579.20, stdev=40.24, samples=5 00:35:58.435 iops : min= 882, max= 910, avg=894.80, stdev=10.06, samples=5 00:35:58.435 lat (usec) : 500=0.15%, 750=2.07%, 1000=18.97% 00:35:58.435 lat (msec) : 2=78.77% 00:35:58.435 cpu : usr=0.92%, sys=2.71%, ctx=2613, majf=0, minf=1 00:35:58.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.435 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.435 issued rwts: total=2609,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:58.435 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1226742: Thu Dec 5 13:40:20 2024 00:35:58.435 read: IOPS=201, BW=807KiB/s (826kB/s)(2556KiB/3169msec) 00:35:58.435 slat (usec): min=6, max=22582, avg=87.33, stdev=1015.04 00:35:58.435 clat (usec): min=663, max=42066, avg=4830.04, stdev=11933.50 00:35:58.435 lat (usec): min=675, max=64059, avg=4907.24, stdev=12131.37 00:35:58.435 clat percentiles (usec): 00:35:58.435 | 1.00th=[ 717], 5.00th=[ 799], 10.00th=[ 832], 20.00th=[ 889], 00:35:58.435 | 30.00th=[ 938], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1057], 00:35:58.435 | 70.00th=[ 1090], 80.00th=[ 1139], 90.00th=[ 1254], 95.00th=[41681], 00:35:58.435 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:58.435 | 99.99th=[42206] 00:35:58.435 bw ( KiB/s): min= 89, max= 3400, per=11.84%, avg=846.83, stdev=1329.97, samples=6 00:35:58.435 iops : min= 22, max= 850, avg=211.67, stdev=332.52, samples=6 00:35:58.435 lat (usec) : 750=2.03%, 1000=42.50% 00:35:58.435 lat (msec) : 2=45.94%, 50=9.38% 00:35:58.435 cpu : usr=0.41%, sys=0.66%, ctx=643, majf=0, minf=2 00:35:58.435 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.435 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.435 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.435 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:58.435 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1226743: Thu Dec 5 13:40:20 2024 00:35:58.435 read: IOPS=24, BW=95.7KiB/s (98.0kB/s)(268KiB/2799msec) 00:35:58.435 slat (usec): min=26, 
max=13794, avg=233.39, stdev=1669.02 00:35:58.435 clat (usec): min=4061, max=43996, avg=41225.15, stdev=4635.57 00:35:58.435 lat (usec): min=4126, max=55988, avg=41461.54, stdev=4968.00 00:35:58.435 clat percentiles (usec): 00:35:58.435 | 1.00th=[ 4047], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:58.435 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:35:58.435 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:58.435 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:35:58.435 | 99.99th=[43779] 00:35:58.435 bw ( KiB/s): min= 96, max= 96, per=1.34%, avg=96.00, stdev= 0.00, samples=5 00:35:58.436 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:35:58.436 lat (msec) : 10=1.47%, 50=97.06% 00:35:58.436 cpu : usr=0.14%, sys=0.00%, ctx=70, majf=0, minf=2 00:35:58.436 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.436 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.436 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:58.436 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1226744: Thu Dec 5 13:40:20 2024 00:35:58.436 read: IOPS=906, BW=3623KiB/s (3710kB/s)(9384KiB/2590msec) 00:35:58.436 slat (nsec): min=6111, max=61878, avg=25673.36, stdev=3297.02 00:35:58.436 clat (usec): min=257, max=1780, avg=1060.79, stdev=105.59 00:35:58.436 lat (usec): min=264, max=1805, avg=1086.46, stdev=106.01 00:35:58.436 clat percentiles (usec): 00:35:58.436 | 1.00th=[ 742], 5.00th=[ 865], 10.00th=[ 930], 20.00th=[ 996], 00:35:58.436 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:35:58.436 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:35:58.436 | 99.00th=[ 1270], 99.50th=[ 1303], 99.90th=[ 1352], 99.95th=[ 1369], 00:35:58.436 | 99.99th=[ 1778] 00:35:58.436 bw ( KiB/s): min= 3592, max= 3696, per=50.99%, avg=3643.20, stdev=37.78, samples=5 00:35:58.436 iops : min= 898, max= 924, avg=910.80, stdev= 9.44, samples=5 00:35:58.436 lat (usec) : 500=0.21%, 750=0.85%, 1000=19.56% 00:35:58.436 lat (msec) : 2=79.34% 00:35:58.436 cpu : usr=0.85%, sys=2.94%, ctx=2347, majf=0, minf=2 00:35:58.436 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.436 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.436 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.436 issued rwts: total=2347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.436 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:58.436 00:35:58.436 Run status group 0 (all jobs): 00:35:58.436 READ: bw=7144KiB/s (7316kB/s), 95.7KiB/s-3623KiB/s (98.0kB/s-3710kB/s), io=22.1MiB (23.2MB), run=2590-3169msec 00:35:58.436 00:35:58.436 Disk stats (read/write): 00:35:58.436 nvme0n1: ios=2521/0, merge=0/0, ticks=2607/0, in_queue=2607, util=92.69% 00:35:58.436 nvme0n2: ios=637/0, merge=0/0, ticks=2967/0, in_queue=2967, util=94.67% 00:35:58.436 nvme0n3: ios=62/0, merge=0/0, ticks=2553/0, in_queue=2553, util=96.03% 00:35:58.436 nvme0n4: ios=2347/0, merge=0/0, ticks=2432/0, in_queue=2432, util=95.90% 00:35:58.436 13:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:35:58.436 13:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:35:58.697 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:58.697 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:35:58.697 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:58.697 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:35:58.958 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:35:58.958 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:35:59.254 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:35:59.254 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1226546 00:35:59.254 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:35:59.254 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:59.254 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:59.254 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:59.254 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:35:59.254 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:59.254 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:59.254 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:59.254 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:59.254 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:35:59.254 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:35:59.254 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:35:59.254 nvmf hotplug test: fio failed as expected 00:35:59.254 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:59.577 rmmod nvme_tcp 00:35:59.577 rmmod nvme_fabrics 00:35:59.577 rmmod nvme_keyring 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1223378 ']' 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1223378 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1223378 ']' 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1223378 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:59.577 13:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1223378 00:35:59.577 13:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:59.577 13:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:59.577 13:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1223378' 00:35:59.577 killing process with pid 1223378 00:35:59.577 13:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1223378 00:35:59.577 13:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1223378 00:35:59.838 13:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:59.838 13:40:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:59.838 13:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:59.838 13:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:35:59.838 13:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:35:59.838 13:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:59.838 13:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:35:59.838 13:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:59.838 13:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:59.838 13:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.838 13:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:59.838 13:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:01.753 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:01.753 00:36:01.753 real 0m28.716s 00:36:01.753 user 2m20.949s 00:36:01.753 sys 0m12.621s 00:36:01.753 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:01.753 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:36:01.753 ************************************ 00:36:01.753 END TEST nvmf_fio_target 00:36:01.753 ************************************ 00:36:01.753 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:01.753 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:01.753 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:01.753 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:02.015 ************************************ 00:36:02.015 START TEST nvmf_bdevio 00:36:02.015 ************************************ 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:36:02.015 * Looking for test storage... 
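The nvmf_bdevio suite starting here is driven by one wrapper script; a minimal sketch of reproducing the run by hand, assuming the SPDK tree at the workspace path shown in the trace (run_test only wraps the same command in timing and the START/END TEST banners):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # same flags as the CI invocation traced above
  ./test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode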
00:36:02.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:02.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.015 --rc genhtml_branch_coverage=1 00:36:02.015 --rc genhtml_function_coverage=1 00:36:02.015 --rc genhtml_legend=1 00:36:02.015 --rc geninfo_all_blocks=1 00:36:02.015 --rc geninfo_unexecuted_blocks=1 00:36:02.015 00:36:02.015 ' 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:02.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.015 --rc genhtml_branch_coverage=1 00:36:02.015 --rc genhtml_function_coverage=1 00:36:02.015 --rc genhtml_legend=1 00:36:02.015 --rc geninfo_all_blocks=1 00:36:02.015 --rc geninfo_unexecuted_blocks=1 00:36:02.015 00:36:02.015 ' 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:02.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.015 --rc genhtml_branch_coverage=1 00:36:02.015 --rc genhtml_function_coverage=1 00:36:02.015 --rc genhtml_legend=1 00:36:02.015 --rc geninfo_all_blocks=1 00:36:02.015 --rc geninfo_unexecuted_blocks=1 00:36:02.015 00:36:02.015 ' 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:02.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.015 --rc genhtml_branch_coverage=1 00:36:02.015 --rc genhtml_function_coverage=1 00:36:02.015 --rc genhtml_legend=1 00:36:02.015 --rc geninfo_all_blocks=1 00:36:02.015 --rc geninfo_unexecuted_blocks=1 00:36:02.015 00:36:02.015 ' 00:36:02.015 13:40:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:02.015 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:02.016 13:40:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.016 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:02.276 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:02.276 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:36:02.276 13:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:10.417 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:10.417 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:36:10.417 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:10.417 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:10.417 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:10.417 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:10.417 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:10.417 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:36:10.417 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:36:10.417 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:36:10.417 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:36:10.417 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:36:10.417 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:36:10.417 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:36:10.417 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:36:10.417 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:10.418 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:10.418 13:40:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:10.418 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:10.418 Found net devices under 0000:31:00.0: cvl_0_0 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:10.418 Found net devices under 0000:31:00.1: cvl_0_1 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:10.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:10.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:36:10.418 00:36:10.418 --- 10.0.0.2 ping statistics --- 00:36:10.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:10.418 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:10.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:10.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:36:10.418 00:36:10.418 --- 10.0.0.1 ping statistics --- 00:36:10.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:10.418 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:10.418 13:40:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1232348 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1232348 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1232348 ']' 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:10.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:10.418 13:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:10.678 [2024-12-05 13:40:33.007328] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:10.678 [2024-12-05 13:40:33.008317] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:36:10.678 [2024-12-05 13:40:33.008355] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:10.678 [2024-12-05 13:40:33.110714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:10.678 [2024-12-05 13:40:33.152412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:10.678 [2024-12-05 13:40:33.152460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:10.678 [2024-12-05 13:40:33.152468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:10.678 [2024-12-05 13:40:33.152476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:10.678 [2024-12-05 13:40:33.152482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:10.678 [2024-12-05 13:40:33.154332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:10.678 [2024-12-05 13:40:33.154488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:10.679 [2024-12-05 13:40:33.154644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:10.679 [2024-12-05 13:40:33.154644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:10.679 [2024-12-05 13:40:33.238314] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
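Stripped of the xtrace noise, the target launch traced above reduces to a few commands; a minimal sketch, assuming SPDK_ROOT points at the workspace checkout and the default /var/tmp/spdk.sock RPC socket is in use:

  # start nvmf_tgt in interrupt mode inside the namespace that owns cvl_0_0 (10.0.0.2);
  # -m 0x78 pins reactors to cores 3-6, matching the 'Reactor started on core ...' notices
  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
  nvmfpid=$!
  # waitforlisten amounts to polling the RPC socket until the app answers
  "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock -t 60 rpc_get_methods >/dev/null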
00:36:10.679 [2024-12-05 13:40:33.239080] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:10.679 [2024-12-05 13:40:33.239571] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:10.679 [2024-12-05 13:40:33.240086] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:10.679 [2024-12-05 13:40:33.240117] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:11.249 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:11.249 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:36:11.249 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:11.249 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:11.249 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:11.509 [2024-12-05 13:40:33.843456] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:11.509 Malloc0 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.509 13:40:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:36:11.509 [2024-12-05 13:40:33.923707] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:36:11.509 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:36:11.510 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:36:11.510 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:11.510 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:11.510 { 00:36:11.510 "params": { 00:36:11.510 "name": "Nvme$subsystem", 00:36:11.510 "trtype": "$TEST_TRANSPORT", 00:36:11.510 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:11.510 "adrfam": "ipv4", 00:36:11.510 "trsvcid": "$NVMF_PORT", 00:36:11.510 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:11.510 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:11.510 "hdgst": ${hdgst:-false}, 00:36:11.510 "ddgst": ${ddgst:-false} 00:36:11.510 }, 00:36:11.510 "method": "bdev_nvme_attach_controller" 00:36:11.510 } 00:36:11.510 EOF 00:36:11.510 )") 00:36:11.510 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:36:11.510 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:36:11.510 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:36:11.510 13:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:11.510 "params": { 00:36:11.510 "name": "Nvme1", 00:36:11.510 "trtype": "tcp", 00:36:11.510 "traddr": "10.0.0.2", 00:36:11.510 "adrfam": "ipv4", 00:36:11.510 "trsvcid": "4420", 00:36:11.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:11.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:11.510 "hdgst": false, 00:36:11.510 "ddgst": false 00:36:11.510 }, 00:36:11.510 "method": "bdev_nvme_attach_controller" 00:36:11.510 }' 00:36:11.510 [2024-12-05 13:40:33.981373] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
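Collected in one place, the rpc_cmd calls traced above are the entire target-side provisioning; a sketch of the same sequence with rpc.py invoked directly (rpc_cmd is the harness's wrapper around rpc.py):

  rpc="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t tcp -o -u 8192     # -u 8192: in-capsule data size
  $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420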
00:36:11.510 [2024-12-05 13:40:33.981431] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232482 ]
00:36:11.510 [2024-12-05 13:40:34.065699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:36:11.774 [2024-12-05 13:40:34.108018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:36:11.774 [2024-12-05 13:40:34.108138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:36:11.774 [2024-12-05 13:40:34.108141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:12.035 I/O targets:
00:36:12.035 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:36:12.035
00:36:12.035
00:36:12.035 CUnit - A unit testing framework for C - Version 2.1-3
00:36:12.035 http://cunit.sourceforge.net/
00:36:12.035
00:36:12.035
00:36:12.035 Suite: bdevio tests on: Nvme1n1
00:36:12.035 Test: blockdev write read block ...passed
00:36:12.035 Test: blockdev write zeroes read block ...passed
00:36:12.035 Test: blockdev write zeroes read no split ...passed
00:36:12.035 Test: blockdev write zeroes read split ...passed
00:36:12.035 Test: blockdev write zeroes read split partial ...passed
00:36:12.035 Test: blockdev reset ...[2024-12-05 13:40:34.565135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:36:12.035 [2024-12-05 13:40:34.565198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe54b0 (9): Bad file descriptor
00:36:12.035 [2024-12-05 13:40:34.571212] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:36:12.035 passed
00:36:12.035 Test: blockdev write read 8 blocks ...passed
00:36:12.035 Test: blockdev write read size > 128k ...passed
00:36:12.035 Test: blockdev write read invalid size ...passed
00:36:12.297 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:36:12.297 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:36:12.297 Test: blockdev write read max offset ...passed
00:36:12.297 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:36:12.297 Test: blockdev writev readv 8 blocks ...passed
00:36:12.297 Test: blockdev writev readv 30 x 1block ...passed
00:36:12.297 Test: blockdev writev readv block ...passed
00:36:12.297 Test: blockdev writev readv size > 128k ...passed
00:36:12.297 Test: blockdev writev readv size > 128k in two iovs ...passed
00:36:12.297 Test: blockdev comparev and writev ...[2024-12-05 13:40:34.796710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:36:12.297 [2024-12-05 13:40:34.796736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:36:12.297 [2024-12-05 13:40:34.796747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:36:12.297 [2024-12-05 13:40:34.796756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:12.297 [2024-12-05 13:40:34.797282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:36:12.297 [2024-12-05 13:40:34.797293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:36:12.297 [2024-12-05 13:40:34.797303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:36:12.297 [2024-12-05 13:40:34.797309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:36:12.297 [2024-12-05 13:40:34.797834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:36:12.297 [2024-12-05 13:40:34.797843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:36:12.297 [2024-12-05 13:40:34.797853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:36:12.297 [2024-12-05 13:40:34.797858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:36:12.297 [2024-12-05 13:40:34.798407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:36:12.297 [2024-12-05 13:40:34.798416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:36:12.297 [2024-12-05 13:40:34.798425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:36:12.297 [2024-12-05 13:40:34.798431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:36:12.297 passed
00:36:12.559 Test: blockdev nvme passthru rw ...passed
00:36:12.559 Test: blockdev nvme passthru vendor specific ...[2024-12-05 13:40:34.882769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:36:12.559 [2024-12-05 13:40:34.882780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:36:12.559 [2024-12-05 13:40:34.883140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:36:12.559 [2024-12-05 13:40:34.883148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:36:12.559 [2024-12-05 13:40:34.883484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:36:12.559 [2024-12-05 13:40:34.883493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:36:12.559 [2024-12-05 13:40:34.883814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:36:12.559 [2024-12-05 13:40:34.883822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:36:12.559 passed
00:36:12.559 Test: blockdev nvme admin passthru ...passed
00:36:12.559 Test: blockdev copy ...passed
00:36:12.559
00:36:12.559 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:36:12.559               suites      1      1    n/a      0        0
00:36:12.559                tests     23     23     23      0        0
00:36:12.559              asserts    152    152    152      0      n/a
00:36:12.559
00:36:12.559 Elapsed time = 1.143 seconds
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
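For reference, the target state and bdevio invocation exercised in the run above boil down to the following sequence; a minimal sketch assuming an nvmf_tgt already listening on its RPC socket and the stock scripts/rpc.py (the addresses, NQNs and malloc geometry come from the log; the JSON wrapper around the printed bdev_nvme_attach_controller parameters is the standard SPDK subsystem-config shape, not the harness's verbatim gen_nvmf_target_json output):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path from the log
RPC="$SPDK/scripts/rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192              # TCP transport, 8192 B in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB ramdisk, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Run bdevio against the exported namespace, feeding the attach-controller
# config through fd 62 the same way the harness passes /dev/fd/62.
"$SPDK/test/bdev/bdevio/bdevio" --json /dev/fd/62 62<<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF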
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1232348 ']'
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1232348
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1232348 ']'
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1232348
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname
00:36:12.559 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1232348
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']'
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1232348'
killing process with pid 1232348
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1232348
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1232348
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:12.820 13:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:15.360 13:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:15.360
00:36:15.360 real 0m13.082s
00:36:15.360 user 0m9.814s
00:36:15.361 sys 0m7.172s
00:36:15.361 13:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:15.361 13:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:36:15.361 ************************************
00:36:15.361 END TEST nvmf_bdevio
00:36:15.361 ************************************
00:36:15.361 13:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:36:15.361
00:36:15.361 real 5m9.859s
00:36:15.361 user 10m18.281s
00:36:15.361 sys 2m10.920s
00:36:15.361 13:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:15.361 13:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:36:15.361 ************************************
00:36:15.361 END TEST nvmf_target_core_interrupt_mode
00:36:15.361 ************************************
00:36:15.361 13:40:37 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:36:15.361 13:40:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:36:15.361 13:40:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:15.361 13:40:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:15.361 ************************************
00:36:15.361 START TEST nvmf_interrupt
00:36:15.361 ************************************
00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:36:15.361 * Looking for test storage...
00:36:15.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:15.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.361 --rc genhtml_branch_coverage=1 00:36:15.361 --rc genhtml_function_coverage=1 00:36:15.361 --rc genhtml_legend=1 00:36:15.361 --rc geninfo_all_blocks=1 00:36:15.361 --rc geninfo_unexecuted_blocks=1 00:36:15.361 00:36:15.361 ' 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:15.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.361 --rc genhtml_branch_coverage=1 00:36:15.361 --rc genhtml_function_coverage=1 00:36:15.361 --rc genhtml_legend=1 00:36:15.361 --rc geninfo_all_blocks=1 00:36:15.361 --rc geninfo_unexecuted_blocks=1 00:36:15.361 00:36:15.361 ' 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:15.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.361 --rc genhtml_branch_coverage=1 00:36:15.361 --rc genhtml_function_coverage=1 00:36:15.361 --rc genhtml_legend=1 00:36:15.361 --rc geninfo_all_blocks=1 00:36:15.361 --rc geninfo_unexecuted_blocks=1 00:36:15.361 00:36:15.361 ' 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:15.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.361 --rc genhtml_branch_coverage=1 00:36:15.361 --rc genhtml_function_coverage=1 00:36:15.361 --rc genhtml_legend=1 00:36:15.361 --rc geninfo_all_blocks=1 00:36:15.361 --rc geninfo_unexecuted_blocks=1 00:36:15.361 00:36:15.361 ' 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:15.361 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:36:15.362 13:40:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:36:23.496 Found 0000:31:00.0 (0x8086 - 0x159b) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:23.496 13:40:45 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:36:23.496 Found 0000:31:00.1 (0x8086 - 0x159b) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:36:23.496 Found net devices under 0000:31:00.0: cvl_0_0 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:23.496 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:36:23.497 Found net devices under 0000:31:00.1: cvl_0_1 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:23.497 13:40:45 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:23.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:23.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms 00:36:23.497 00:36:23.497 --- 10.0.0.2 ping statistics --- 00:36:23.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.497 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:23.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:23.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:36:23.497 00:36:23.497 --- 10.0.0.1 ping statistics --- 00:36:23.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.497 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:23.497 13:40:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:23.497 13:40:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:36:23.497 13:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:23.497 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:23.497 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:23.497 13:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1237502 00:36:23.497 13:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1237502 00:36:23.497 13:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:23.497 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1237502 ']' 00:36:23.497 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:23.497 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:23.497 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:23.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:23.497 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:23.497 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:23.757 [2024-12-05 13:40:46.093434] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:23.757 [2024-12-05 13:40:46.094422] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:36:23.757 [2024-12-05 13:40:46.094462] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:23.757 [2024-12-05 13:40:46.178087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:23.757 [2024-12-05 13:40:46.213022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:36:23.757 [2024-12-05 13:40:46.213052] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:23.757 [2024-12-05 13:40:46.213060] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:23.757 [2024-12-05 13:40:46.213066] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:23.757 [2024-12-05 13:40:46.213072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:23.757 [2024-12-05 13:40:46.214199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:23.757 [2024-12-05 13:40:46.214201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.757 [2024-12-05 13:40:46.269886] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:23.757 [2024-12-05 13:40:46.270552] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:23.757 [2024-12-05 13:40:46.270856] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:23.757 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:23.757 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:36:23.757 13:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:23.757 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:23.757 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:36:24.017 5000+0 records in 00:36:24.017 5000+0 records out 00:36:24.017 10240000 bytes (10 MB, 9.8 MiB) copied, 0.018509 s, 553 MB/s 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:24.017 AIO0 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:24.017 [2024-12-05 13:40:46.418780] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.017 13:40:46 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.017 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:24.017 [2024-12-05 13:40:46.459412] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:24.018 13:40:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.018 13:40:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:24.018 13:40:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1237502 0 00:36:24.018 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1237502 0 idle 00:36:24.018 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1237502 00:36:24.018 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:24.018 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:24.018 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:24.018 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:24.018 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:24.018 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:24.018 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:24.018 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:24.018 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:24.018 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1237502 -w 256 00:36:24.018 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1237502 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.24 reactor_0' 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1237502 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.24 reactor_0 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1237502 1 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1237502 1 idle 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1237502 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1237502 -w 256 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1237518 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1237518 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:24.278 13:40:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1237552 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 
0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1237502 0 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1237502 0 busy 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1237502 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1237502 -w 256 00:36:24.539 13:40:46 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1237502 root 20 0 128.2g 44928 32256 R 86.7 0.0 0:00.37 reactor_0' 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1237502 root 20 0 128.2g 44928 32256 R 86.7 0.0 0:00.37 reactor_0 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=86.7 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=86 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1237502 1 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1237502 1 busy 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1237502 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 ))
00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1237502 -w 256
00:36:24.539 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:36:24.800 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1237518 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:00.25 reactor_1'
00:36:24.800 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1237518 root 20 0 128.2g 44928 32256 R 93.3 0.0 0:00.25 reactor_1
00:36:24.800 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:36:24.800 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:36:24.800 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3
00:36:24.800 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93
00:36:24.800 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:36:24.800 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:36:24.800 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:36:24.800 13:40:47 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:36:24.801 13:40:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1237552
00:36:34.797 Initializing NVMe Controllers
00:36:34.797 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:34.797 Controller IO queue size 256, less than required.
00:36:34.797 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:36:34.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:36:34.797 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:36:34.797 Initialization complete. Launching workers.
00:36:34.797 ========================================================
00:36:34.797 Latency(us)
00:36:34.797 Device Information                                                  :     IOPS    MiB/s   Average      min      max
00:36:34.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16115.07    62.95  15896.02  2493.81 19677.89
00:36:34.797 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 21173.43    82.71  12091.88  7490.73 27457.59
00:36:34.797 ========================================================
00:36:34.797 Total                                                               : 37288.49   145.66  13735.93  2493.81 27457.59
00:36:34.797
00:36:34.797 13:40:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:36:34.797 13:40:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1237502 0
00:36:34.797 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1237502 0 idle
00:36:34.797 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1237502
00:36:34.797 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:36:34.797 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:36:34.797 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:36:34.797 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:36:34.797 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:36:34.797 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:36:34.797 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:36:34.797 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:36:34.797 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:36:34.797 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1237502 -w 256
00:36:34.797 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1237502 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.24 reactor_0'
00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1237502 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.24 reactor_0
00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7
00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6
00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
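The reactor_is_idle verification above (and the reactor_1 pass that follows) reduces to sampling one thread line of batch-mode top and comparing the %CPU column, field 9, against a threshold. A condensed standalone sketch of that logic, using the thresholds and nvmf_tgt pid from this run (the function name matches target/interrupt.sh, but the body paraphrases interrupt/common.sh rather than quoting it):

# Return 0 if thread reactor_<idx> of <pid> is at or below 30% CPU.
reactor_is_idle() {
    local pid=$1 idx=$2 idle_threshold=30
    local top_reactor cpu_rate
    # One batch iteration of top in threads mode, limited to the target pid.
    top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
    # Strip leading blanks and take column 9 (%CPU), e.g. "6.7".
    cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
    cpu_rate=${cpu_rate%.*}          # truncate to an integer: "6.7" -> "6"
    (( cpu_rate > idle_threshold )) && return 1   # still busy
    return 0
}

reactor_is_idle 1237502 1   # e.g. confirm reactor_1 settled after the perf run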
interrupt/common.sh@11 -- # local idx=1 00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1237502 -w 256 00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1237518 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1' 00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1237518 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.01 reactor_1 00:36:34.798 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:35.058 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:35.058 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:35.058 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:35.058 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:35.058 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:35.058 13:40:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:35.058 13:40:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:36:35.631 13:40:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:36:35.631 13:40:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:36:35.631 13:40:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:36:35.631 13:40:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:36:35.631 13:40:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1237502 0 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1237502 0 idle 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1237502 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:36:37.542 13:40:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1237502 -w 256 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1237502 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.48 reactor_0' 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1237502 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.48 reactor_0 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1237502 1 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1237502 1 idle 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1237502 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
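
The probe the trace keeps repeating above (interrupt/common.sh@25-35) samples one batch iteration of top for the target pid, greps out the reactor_<idx> thread row, pulls the %CPU column with awk, truncates it to an integer, and classifies it against busy_threshold=65 / idle_threshold=30. A minimal standalone sketch of the same idea follows; reactor_cpu_state is a hypothetical helper name, not a function from interrupt/common.sh, and the retry loop the script wraps around it is omitted:

reactor_cpu_state() {
    # One batch iteration of top for this pid's threads (wide output, as in
    # common.sh@26); keep only the reactor_<idx> row.
    local pid=$1 idx=$2 line cpu
    line=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_${idx}") || return 1
    # Field 9 of the thread row is %CPU (cf. the echoed rows above, e.g. 93.3).
    cpu=$(awk '{print $9}' <<< "$line")
    cpu=${cpu%.*}                            # drop the fraction, as common.sh@28 does
    if   (( cpu >= 65 )); then echo busy     # busy_threshold, common.sh@13
    elif (( cpu <= 30 )); then echo idle     # idle_threshold, common.sh@14
    else echo indeterminate
    fi
}

Invoked as reactor_cpu_state 1237502 1, this would have printed busy for the 93.3% sample taken while the perf run was active and idle for the 0.0% samples afterwards; the real helper additionally retries up to ten times (the (( j = 10 )) loop in the trace) before giving up.
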
00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1237502 -w 256 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1237518 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1237518 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:36:37.803 13:41:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:36:38.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:36:38.064 13:41:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:36:38.064 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:36:38.064 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:36:38.064 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:38.064 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:36:38.064 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:36:38.064 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:36:38.064 13:41:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:36:38.064 13:41:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:36:38.064 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:38.064 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:36:38.064 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:38.064 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:36:38.064 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:38.064 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:38.064 rmmod nvme_tcp 00:36:38.064 rmmod nvme_fabrics 00:36:38.325 rmmod nvme_keyring 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1237502 ']' 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1237502 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1237502 ']' 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1237502 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1237502 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1237502' 00:36:38.325 killing process with pid 1237502 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1237502 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1237502 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:38.325 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:36:38.586 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:38.586 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:38.586 13:41:00 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:38.586 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:38.586 13:41:00 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.497 13:41:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:40.497 00:36:40.497 real 0m25.440s 00:36:40.497 user 0m40.344s 00:36:40.497 sys 0m9.906s 00:36:40.497 13:41:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:40.497 13:41:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:40.498 ************************************ 00:36:40.498 END TEST nvmf_interrupt 00:36:40.498 ************************************ 00:36:40.498 00:36:40.498 real 31m10.205s 00:36:40.498 user 62m6.973s 00:36:40.498 sys 10m55.495s 00:36:40.498 13:41:03 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:40.498 13:41:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:40.498 ************************************ 00:36:40.498 END TEST nvmf_tcp 00:36:40.498 ************************************ 00:36:40.498 13:41:03 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:36:40.498 13:41:03 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:40.498 13:41:03 -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:40.498 13:41:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:40.498 13:41:03 -- common/autotest_common.sh@10 -- # set +x 00:36:40.759 ************************************ 00:36:40.759 START TEST spdkcli_nvmf_tcp 00:36:40.759 ************************************ 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:40.759 * Looking for test storage... 00:36:40.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:40.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.759 --rc genhtml_branch_coverage=1 00:36:40.759 --rc genhtml_function_coverage=1 00:36:40.759 --rc genhtml_legend=1 00:36:40.759 --rc geninfo_all_blocks=1 00:36:40.759 --rc geninfo_unexecuted_blocks=1 00:36:40.759 00:36:40.759 ' 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:40.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.759 --rc genhtml_branch_coverage=1 00:36:40.759 --rc genhtml_function_coverage=1 00:36:40.759 --rc genhtml_legend=1 00:36:40.759 --rc geninfo_all_blocks=1 00:36:40.759 --rc geninfo_unexecuted_blocks=1 00:36:40.759 00:36:40.759 ' 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:40.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.759 --rc genhtml_branch_coverage=1 00:36:40.759 --rc genhtml_function_coverage=1 00:36:40.759 --rc genhtml_legend=1 00:36:40.759 --rc geninfo_all_blocks=1 00:36:40.759 --rc geninfo_unexecuted_blocks=1 00:36:40.759 00:36:40.759 ' 00:36:40.759 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:40.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.759 --rc genhtml_branch_coverage=1 00:36:40.759 --rc genhtml_function_coverage=1 00:36:40.759 --rc genhtml_legend=1 00:36:40.759 --rc geninfo_all_blocks=1 00:36:40.759 --rc geninfo_unexecuted_blocks=1 00:36:40.759 00:36:40.759 ' 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:40.760 
13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:40.760 13:41:03 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:40.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:40.760 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:41.021 13:41:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:41.021 13:41:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1240798 00:36:41.021 13:41:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1240798 00:36:41.021 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1240798 ']' 00:36:41.021 13:41:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:41.021 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:41.021 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:41.021 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:41.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:41.021 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:41.021 13:41:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:41.021 [2024-12-05 13:41:03.390662] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
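
The target above is launched with a two-core cpumask (-m 0x3, i.e. cores 0 and 1, matching the two "Reactor started" notices that follow) and main core 0 (-p 0); waitforlisten then blocks until the app answers on its RPC socket before any spdkcli command is issued. A minimal sketch of that gate, assuming the default /var/tmp/spdk.sock endpoint and an $SPDK_DIR checkout; wait_for_rpc is a hypothetical stand-in for common.sh's own waitforlisten, not its actual implementation:

wait_for_rpc() {
    local sock=${1:-/var/tmp/spdk.sock} tries=${2:-100}
    while (( tries-- > 0 )); do
        # rpc_get_methods fails until the target is up and listening on $sock
        if "$SPDK_DIR/scripts/rpc.py" -s "$sock" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1
}

Polling the RPC layer, rather than merely checking that the pid exists, matters here: the reactor startup notices can appear before the RPC server is accepting connections, so a bare kill -0 check could race with the first spdkcli_job.py invocation.
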
00:36:41.021 [2024-12-05 13:41:03.390741] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1240798 ] 00:36:41.021 [2024-12-05 13:41:03.473140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:41.021 [2024-12-05 13:41:03.516412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:41.021 [2024-12-05 13:41:03.516414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:41.963 13:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:41.963 13:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:36:41.963 13:41:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:41.963 13:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:41.963 13:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:41.963 13:41:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:41.963 13:41:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:41.963 13:41:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:41.963 13:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:41.963 13:41:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:41.963 13:41:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:41.963 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:41.963 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:41.963 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:41.963 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:41.963 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:41.963 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:41.963 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:41.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:41.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:41.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:41.963 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:41.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:41.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:41.963 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:41.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:41.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:36:41.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:41.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:41.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:41.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:41.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:41.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:41.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:41.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:41.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:41.964 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:41.964 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:41.964 ' 00:36:44.504 [2024-12-05 13:41:06.640972] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:45.443 [2024-12-05 13:41:07.848941] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:47.984 [2024-12-05 13:41:10.107745] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:49.900 [2024-12-05 13:41:12.013410] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:51.290 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:51.290 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:51.290 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:51.290 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:51.290 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:51.290 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:51.290 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:51.290 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:51.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:51.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:51.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:51.290 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:51.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:51.290 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:51.290 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:51.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:51.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:51.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:51.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:51.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:51.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:51.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:51.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:51.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:51.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:51.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:51.290 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:51.290 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:51.290 13:41:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:51.290 13:41:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:51.290 13:41:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:51.290 13:41:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:51.290 13:41:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:51.290 13:41:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:51.290 13:41:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:51.290 13:41:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:51.552 13:41:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:51.552 13:41:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:51.552 13:41:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:51.552 13:41:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:51.552 13:41:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:51.552 
13:41:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:51.552 13:41:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:51.552 13:41:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:51.552 13:41:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:51.552 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:51.552 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:51.552 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:51.552 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:51.552 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:51.552 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:51.552 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:51.552 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:51.552 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:51.552 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:51.552 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:51.552 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:51.552 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:51.552 ' 00:36:56.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:56.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:56.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:56.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:56.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:56.834 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:56.834 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:56.834 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:56.834 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:56.834 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:56.834 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:56.834 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:56.834 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:56.834 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:56.834 13:41:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:56.834 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:56.834 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:56.834 
13:41:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1240798 00:36:56.834 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1240798 ']' 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1240798 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1240798 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1240798' 00:36:56.835 killing process with pid 1240798 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1240798 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1240798 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1240798 ']' 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1240798 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1240798 ']' 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1240798 00:36:56.835 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1240798) - No such process 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1240798 is not found' 00:36:56.835 Process with pid 1240798 is not found 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:56.835 00:36:56.835 real 0m16.278s 00:36:56.835 user 0m33.712s 00:36:56.835 sys 0m0.737s 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:56.835 13:41:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:56.835 ************************************ 00:36:56.835 END TEST spdkcli_nvmf_tcp 00:36:56.835 ************************************ 00:36:57.096 13:41:19 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:57.096 13:41:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:57.096 13:41:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:57.096 13:41:19 -- common/autotest_common.sh@10 -- # set +x 00:36:57.096 ************************************ 00:36:57.096 START TEST nvmf_identify_passthru 00:36:57.096 ************************************ 00:36:57.096 13:41:19 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:57.096 * Looking for test 
storage... 00:36:57.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:57.096 13:41:19 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:57.096 13:41:19 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:36:57.096 13:41:19 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:57.096 13:41:19 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:57.096 13:41:19 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:57.096 13:41:19 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:57.096 13:41:19 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:57.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.096 --rc genhtml_branch_coverage=1 00:36:57.096 --rc genhtml_function_coverage=1 00:36:57.096 --rc genhtml_legend=1 00:36:57.096 --rc geninfo_all_blocks=1 00:36:57.096 --rc geninfo_unexecuted_blocks=1 00:36:57.096 00:36:57.096 ' 00:36:57.096 13:41:19 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:57.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.096 --rc genhtml_branch_coverage=1 00:36:57.096 --rc genhtml_function_coverage=1 00:36:57.096 --rc genhtml_legend=1 00:36:57.096 --rc geninfo_all_blocks=1 00:36:57.096 --rc geninfo_unexecuted_blocks=1 00:36:57.096 00:36:57.096 ' 00:36:57.096 13:41:19 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:57.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.096 --rc genhtml_branch_coverage=1 00:36:57.096 --rc genhtml_function_coverage=1 00:36:57.096 --rc genhtml_legend=1 00:36:57.096 --rc geninfo_all_blocks=1 00:36:57.096 --rc geninfo_unexecuted_blocks=1 00:36:57.096 00:36:57.096 ' 00:36:57.096 13:41:19 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:57.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:57.096 --rc genhtml_branch_coverage=1 00:36:57.096 --rc genhtml_function_coverage=1 00:36:57.096 --rc genhtml_legend=1 00:36:57.096 --rc geninfo_all_blocks=1 00:36:57.096 --rc geninfo_unexecuted_blocks=1 00:36:57.096 00:36:57.096 ' 00:36:57.097 13:41:19 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:57.097 13:41:19 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:57.097 13:41:19 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:57.097 13:41:19 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:57.097 13:41:19 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:57.097 13:41:19 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.097 13:41:19 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.097 13:41:19 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.097 13:41:19 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:57.097 13:41:19 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:57.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:57.097 13:41:19 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:57.097 13:41:19 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:57.097 13:41:19 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:57.097 13:41:19 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:57.097 13:41:19 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:57.097 13:41:19 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.097 13:41:19 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.097 13:41:19 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.097 13:41:19 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:57.097 13:41:19 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.097 13:41:19 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:57.097 13:41:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:57.097 13:41:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:57.097 13:41:19 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:36:57.097 13:41:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:37:05.233 13:41:27 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:05.233 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:05.233 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:05.233 Found net devices under 0000:31:00.0: cvl_0_0 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:05.233 Found net devices under 0000:31:00.1: cvl_0_1 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:05.233 13:41:27 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:05.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:05.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.683 ms 00:37:05.233 00:37:05.233 --- 10.0.0.2 ping statistics --- 00:37:05.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:05.233 rtt min/avg/max/mdev = 0.683/0.683/0.683/0.000 ms 00:37:05.233 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:05.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:05.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:37:05.233 00:37:05.233 --- 10.0.0.1 ping statistics --- 00:37:05.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:05.233 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:37:05.234 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:05.234 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:37:05.234 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:05.234 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:05.234 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:05.234 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:05.234 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:05.234 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:05.234 13:41:27 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:05.234 13:41:27 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:37:05.234 13:41:27 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:05.234 13:41:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:05.234 13:41:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:37:05.234 13:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:37:05.234 13:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:37:05.234 13:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:37:05.234 13:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:37:05.234 13:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:37:05.234 13:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:37:05.234 13:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:37:05.234 13:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:37:05.234 13:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:37:05.493 13:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:37:05.493 13:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:37:05.493 13:41:27 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:37:05.493 13:41:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:37:05.493 13:41:27 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:37:05.494 13:41:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:05.494 13:41:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:37:05.494 13:41:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:37:05.754 13:41:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605494 00:37:05.754 13:41:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:37:05.754 13:41:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:37:05.754 13:41:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:37:06.325 13:41:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:37:06.325 13:41:28 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:37:06.325 13:41:28 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:06.325 13:41:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:06.325 13:41:28 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:37:06.325 13:41:28 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:06.325 13:41:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:06.325 13:41:28 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1248306 00:37:06.325 13:41:28 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:06.325 13:41:28 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:37:06.325 13:41:28 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1248306 00:37:06.325 13:41:28 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1248306 ']' 00:37:06.325 13:41:28 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:06.325 13:41:28 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:06.325 13:41:28 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:06.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:06.325 13:41:28 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:06.325 13:41:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:06.326 [2024-12-05 13:41:28.886940] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:37:06.326 [2024-12-05 13:41:28.886989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:06.586 [2024-12-05 13:41:28.971816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:06.586 [2024-12-05 13:41:29.008398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:06.587 [2024-12-05 13:41:29.008431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
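The nvmf_tcp_init sequence traced a little further up reduces to a handful of ip(8) and iptables(8) commands: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP port 4420 is opened with a tagged iptables rule, and one ping in each direction proves the path. A minimal standalone sketch of the same plumbing; the interface names, addresses, namespace name and rule comment are all taken from this run's log, while the variable names and script framing are invented for readability:

  #!/usr/bin/env bash
  set -e
  NS=cvl_0_0_ns_spdk   # namespace that will host the NVMe-oF target
  TGT=cvl_0_0          # target-side port, 10.0.0.2 inside $NS
  INI=cvl_0_1          # initiator-side port, 10.0.0.1 in the root namespace
  ip -4 addr flush "$TGT"
  ip -4 addr flush "$INI"
  ip netns add "$NS"
  ip link set "$TGT" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"
  ip link set "$INI" up
  ip netns exec "$NS" ip link set "$TGT" up
  ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP port; the comment tags the rule for later cleanup
  iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                      # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator

Tagging the ACCEPT rule with an SPDK_NVMF comment is what lets the teardown near the end of each test (iptables-save | grep -v SPDK_NVMF | iptables-restore) remove exactly this rule and nothing else.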
00:37:06.587 [2024-12-05 13:41:29.008438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:06.587 [2024-12-05 13:41:29.008445] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:06.587 [2024-12-05 13:41:29.008451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:06.587 [2024-12-05 13:41:29.010233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:06.587 [2024-12-05 13:41:29.010348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:06.587 [2024-12-05 13:41:29.010502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:06.587 [2024-12-05 13:41:29.010502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:07.158 13:41:29 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:07.158 13:41:29 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:37:07.158 13:41:29 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:37:07.158 13:41:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.158 13:41:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:07.158 INFO: Log level set to 20 00:37:07.158 INFO: Requests: 00:37:07.158 { 00:37:07.158 "jsonrpc": "2.0", 00:37:07.158 "method": "nvmf_set_config", 00:37:07.158 "id": 1, 00:37:07.158 "params": { 00:37:07.158 "admin_cmd_passthru": { 00:37:07.158 "identify_ctrlr": true 00:37:07.158 } 00:37:07.158 } 00:37:07.158 } 00:37:07.158 00:37:07.158 INFO: response: 00:37:07.158 { 00:37:07.158 "jsonrpc": "2.0", 00:37:07.158 "id": 1, 00:37:07.158 "result": true 00:37:07.158 } 00:37:07.158 00:37:07.158 13:41:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.158 13:41:29 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:37:07.158 13:41:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.158 13:41:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:07.158 INFO: Setting log level to 20 00:37:07.158 INFO: Setting log level to 20 00:37:07.158 INFO: Log level set to 20 00:37:07.158 INFO: Log level set to 20 00:37:07.158 INFO: Requests: 00:37:07.158 { 00:37:07.158 "jsonrpc": "2.0", 00:37:07.158 "method": "framework_start_init", 00:37:07.158 "id": 1 00:37:07.158 } 00:37:07.158 00:37:07.158 INFO: Requests: 00:37:07.158 { 00:37:07.158 "jsonrpc": "2.0", 00:37:07.158 "method": "framework_start_init", 00:37:07.158 "id": 1 00:37:07.158 } 00:37:07.158 00:37:07.418 [2024-12-05 13:41:29.763787] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:37:07.418 INFO: response: 00:37:07.418 { 00:37:07.418 "jsonrpc": "2.0", 00:37:07.418 "id": 1, 00:37:07.418 "result": true 00:37:07.418 } 00:37:07.418 00:37:07.418 INFO: response: 00:37:07.418 { 00:37:07.418 "jsonrpc": "2.0", 00:37:07.418 "id": 1, 00:37:07.418 "result": true 00:37:07.418 } 00:37:07.418 00:37:07.418 13:41:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.418 13:41:29 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:07.418 13:41:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.418 13:41:29 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:37:07.418 INFO: Setting log level to 40 00:37:07.418 INFO: Setting log level to 40 00:37:07.418 INFO: Setting log level to 40 00:37:07.418 [2024-12-05 13:41:29.777114] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:07.418 13:41:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.418 13:41:29 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:37:07.418 13:41:29 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:07.418 13:41:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:07.418 13:41:29 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:37:07.418 13:41:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.418 13:41:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:07.678 Nvme0n1 00:37:07.678 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.678 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:37:07.678 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.678 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:07.678 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.678 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:37:07.678 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.678 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:07.678 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.678 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:07.678 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.678 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:07.678 [2024-12-05 13:41:30.176217] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:07.678 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.678 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:37:07.678 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.678 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:07.678 [ 00:37:07.678 { 00:37:07.678 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:37:07.678 "subtype": "Discovery", 00:37:07.678 "listen_addresses": [], 00:37:07.678 "allow_any_host": true, 00:37:07.678 "hosts": [] 00:37:07.678 }, 00:37:07.678 { 00:37:07.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:37:07.678 "subtype": "NVMe", 00:37:07.678 "listen_addresses": [ 00:37:07.678 { 00:37:07.678 "trtype": "TCP", 00:37:07.678 "adrfam": "IPv4", 00:37:07.678 "traddr": "10.0.0.2", 00:37:07.678 "trsvcid": "4420" 00:37:07.678 } 00:37:07.678 ], 00:37:07.678 "allow_any_host": true, 00:37:07.678 "hosts": [], 00:37:07.678 "serial_number": 
"SPDK00000000000001", 00:37:07.678 "model_number": "SPDK bdev Controller", 00:37:07.678 "max_namespaces": 1, 00:37:07.678 "min_cntlid": 1, 00:37:07.678 "max_cntlid": 65519, 00:37:07.678 "namespaces": [ 00:37:07.678 { 00:37:07.678 "nsid": 1, 00:37:07.678 "bdev_name": "Nvme0n1", 00:37:07.678 "name": "Nvme0n1", 00:37:07.678 "nguid": "3634473052605494002538450000002D", 00:37:07.678 "uuid": "36344730-5260-5494-0025-38450000002d" 00:37:07.678 } 00:37:07.678 ] 00:37:07.678 } 00:37:07.678 ] 00:37:07.678 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.678 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:07.678 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:37:07.678 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:37:07.937 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:37:07.937 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:37:07.937 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:37:07.937 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:37:07.937 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:37:07.937 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:37:07.937 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:37:07.937 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:07.937 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.937 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:08.209 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.209 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:37:08.209 13:41:30 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:37:08.209 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:08.209 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:37:08.209 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:08.209 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:37:08.209 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:08.209 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:08.209 rmmod nvme_tcp 00:37:08.209 rmmod nvme_fabrics 00:37:08.209 rmmod nvme_keyring 00:37:08.209 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:08.209 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:37:08.209 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:37:08.209 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
1248306 ']' 00:37:08.209 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1248306 00:37:08.209 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1248306 ']' 00:37:08.209 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1248306 00:37:08.209 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:37:08.209 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:08.209 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1248306 00:37:08.209 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:08.209 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:08.209 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1248306' 00:37:08.209 killing process with pid 1248306 00:37:08.210 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1248306 00:37:08.210 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1248306 00:37:08.531 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:08.531 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:08.531 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:08.531 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:37:08.531 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:37:08.531 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:37:08.531 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:08.531 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:08.531 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:08.531 13:41:30 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:08.531 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:08.531 13:41:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.492 13:41:32 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:10.492 00:37:10.492 real 0m13.535s 00:37:10.492 user 0m9.887s 00:37:10.492 sys 0m7.079s 00:37:10.492 13:41:32 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:10.492 13:41:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:37:10.492 ************************************ 00:37:10.492 END TEST nvmf_identify_passthru 00:37:10.492 ************************************ 00:37:10.492 13:41:33 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:10.492 13:41:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:10.492 13:41:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:10.492 13:41:33 -- common/autotest_common.sh@10 -- # set +x 00:37:10.492 ************************************ 00:37:10.492 START TEST nvmf_dif 00:37:10.492 ************************************ 00:37:10.492 13:41:33 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:37:10.753 * Looking for test storage... 
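Condensed, the pass criterion of the nvmf_identify_passthru test that just finished is: the serial and model numbers that spdk_nvme_identify reads over plain PCIe must reappear unchanged when the same controller is re-identified through the NVMe-oF TCP subsystem, which only works if --passthru-identify-ctrlr is configured before framework_start_init (hence the target being launched with --wait-for-rpc). A hedged sketch of the same sequence; the flag spellings are copied from the trace, while the bare rpc.py invocation style (standing in for scripts/rpc.py against the target's /var/tmp/spdk.sock inside the namespace) and the exit handling are simplifications:

  id=./build/bin/spdk_nvme_identify
  bdf=0000:65:00.0   # first NVMe BDF, as reported by scripts/gen_nvme.sh | jq
  nqn=nqn.2016-06.io.spdk:cnode1

  # 1) baseline identify straight over PCIe
  sn=$($id -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  mn=$($id -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:'  | awk '{print $3}')

  # 2) configure the target; passthru must be set before framework_start_init
  rpc.py nvmf_set_config --passthru-identify-ctrlr
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$bdf"
  rpc.py nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 1
  rpc.py nvmf_subsystem_add_ns "$nqn" Nvme0n1
  rpc.py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

  # 3) identify again over the fabric; passthru should surface the drive's data
  fsn=$($id -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$nqn" \
        | grep 'Serial Number:' | awk '{print $3}')
  [ "$sn" = "$fsn" ] || exit 1   # same check the trace runs for model number too

In this run both comparisons were against S64GNE0R605494 and SAMSUNG, so both '[' x '!=' x ']' tests fell through and the test passed.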
00:37:10.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:10.753 13:41:33 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:10.753 13:41:33 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:37:10.753 13:41:33 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:10.753 13:41:33 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:10.753 13:41:33 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:37:10.753 13:41:33 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:10.753 13:41:33 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:10.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.753 --rc genhtml_branch_coverage=1 00:37:10.753 --rc genhtml_function_coverage=1 00:37:10.753 --rc genhtml_legend=1 00:37:10.753 --rc geninfo_all_blocks=1 00:37:10.753 --rc geninfo_unexecuted_blocks=1 00:37:10.753 00:37:10.753 ' 00:37:10.753 13:41:33 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:10.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.753 --rc genhtml_branch_coverage=1 00:37:10.753 --rc genhtml_function_coverage=1 00:37:10.753 --rc genhtml_legend=1 00:37:10.753 --rc geninfo_all_blocks=1 00:37:10.753 --rc geninfo_unexecuted_blocks=1 00:37:10.753 00:37:10.753 ' 00:37:10.753 13:41:33 nvmf_dif -- common/autotest_common.sh@1707 -- # 
export 'LCOV=lcov 00:37:10.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.753 --rc genhtml_branch_coverage=1 00:37:10.753 --rc genhtml_function_coverage=1 00:37:10.753 --rc genhtml_legend=1 00:37:10.753 --rc geninfo_all_blocks=1 00:37:10.753 --rc geninfo_unexecuted_blocks=1 00:37:10.753 00:37:10.753 ' 00:37:10.753 13:41:33 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:10.753 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.753 --rc genhtml_branch_coverage=1 00:37:10.753 --rc genhtml_function_coverage=1 00:37:10.753 --rc genhtml_legend=1 00:37:10.753 --rc geninfo_all_blocks=1 00:37:10.753 --rc geninfo_unexecuted_blocks=1 00:37:10.753 00:37:10.753 ' 00:37:10.754 13:41:33 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:10.754 13:41:33 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:37:10.754 13:41:33 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:10.754 13:41:33 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:10.754 13:41:33 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:10.754 13:41:33 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.754 13:41:33 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.754 13:41:33 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.754 13:41:33 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:37:10.754 13:41:33 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:10.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:10.754 13:41:33 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:37:10.754 13:41:33 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:37:10.754 13:41:33 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:37:10.754 13:41:33 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:37:10.754 13:41:33 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:10.754 13:41:33 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:10.754 13:41:33 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:10.754 13:41:33 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:37:10.754 13:41:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:18.893 13:41:41 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:18.893 13:41:41 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:37:18.893 13:41:41 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:18.893 13:41:41 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:37:18.894 Found 0000:31:00.0 (0x8086 - 0x159b) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:18.894 
13:41:41 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:37:18.894 Found 0000:31:00.1 (0x8086 - 0x159b) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:37:18.894 Found net devices under 0000:31:00.0: cvl_0_0 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:37:18.894 Found net devices under 0000:31:00.1: cvl_0_1 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:18.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:18.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:37:18.894 00:37:18.894 --- 10.0.0.2 ping statistics --- 00:37:18.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:18.894 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:18.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:18.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:37:18.894 00:37:18.894 --- 10.0.0.1 ping statistics --- 00:37:18.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:18.894 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:37:18.894 13:41:41 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:23.096 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:23.096 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:23.096 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:23.096 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:23.096 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:23.096 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:23.096 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:23.097 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:23.097 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:37:23.097 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:37:23.097 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:37:23.097 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:37:23.097 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:37:23.097 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:37:23.097 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:37:23.097 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:37:23.097 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:37:23.097 13:41:45 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:23.097 13:41:45 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:23.097 13:41:45 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:23.097 13:41:45 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:23.097 13:41:45 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:23.097 13:41:45 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:23.097 13:41:45 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:37:23.097 13:41:45 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:37:23.097 13:41:45 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:23.097 13:41:45 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:23.097 13:41:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:23.097 13:41:45 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1255181 00:37:23.097 13:41:45 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1255181 00:37:23.097 13:41:45 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:37:23.097 13:41:45 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1255181 ']' 00:37:23.097 13:41:45 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:23.097 13:41:45 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:23.097 13:41:45 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:37:23.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:23.097 13:41:45 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:23.097 13:41:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:23.097 [2024-12-05 13:41:45.634882] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:37:23.097 [2024-12-05 13:41:45.634930] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:23.356 [2024-12-05 13:41:45.717860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:23.356 [2024-12-05 13:41:45.752488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:23.357 [2024-12-05 13:41:45.752517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:23.357 [2024-12-05 13:41:45.752525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:23.357 [2024-12-05 13:41:45.752532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:23.357 [2024-12-05 13:41:45.752538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:23.357 [2024-12-05 13:41:45.753114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:23.926 13:41:46 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:23.926 13:41:46 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:37:23.926 13:41:46 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:23.926 13:41:46 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:23.926 13:41:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:23.926 13:41:46 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:23.926 13:41:46 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:37:23.926 13:41:46 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:37:23.926 13:41:46 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.926 13:41:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:23.926 [2024-12-05 13:41:46.466346] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:23.926 13:41:46 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.926 13:41:46 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:37:23.926 13:41:46 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:23.926 13:41:46 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:23.926 13:41:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:24.187 ************************************ 00:37:24.187 START TEST fio_dif_1_default 00:37:24.187 ************************************ 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:24.187 bdev_null0 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:24.187 [2024-12-05 13:41:46.558710] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:37:24.187 13:41:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:24.187 { 00:37:24.187 "params": { 00:37:24.187 "name": "Nvme$subsystem", 00:37:24.187 "trtype": "$TEST_TRANSPORT", 00:37:24.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:24.188 "adrfam": "ipv4", 00:37:24.188 "trsvcid": "$NVMF_PORT", 00:37:24.188 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:24.188 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:37:24.188 "hdgst": ${hdgst:-false}, 00:37:24.188 "ddgst": ${ddgst:-false} 00:37:24.188 }, 00:37:24.188 "method": "bdev_nvme_attach_controller" 00:37:24.188 } 00:37:24.188 EOF 00:37:24.188 )") 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:24.188 "params": { 00:37:24.188 "name": "Nvme0", 00:37:24.188 "trtype": "tcp", 00:37:24.188 "traddr": "10.0.0.2", 00:37:24.188 "adrfam": "ipv4", 00:37:24.188 "trsvcid": "4420", 00:37:24.188 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:24.188 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:24.188 "hdgst": false, 00:37:24.188 "ddgst": false 00:37:24.188 }, 00:37:24.188 "method": "bdev_nvme_attach_controller" 00:37:24.188 }' 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:24.188 13:41:46 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:24.447 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:24.447 fio-3.35 00:37:24.447 Starting 1 thread 00:37:36.681 00:37:36.681 filename0: (groupid=0, jobs=1): err= 0: pid=1255714: Thu Dec 5 13:41:57 2024 00:37:36.681 read: IOPS=96, BW=388KiB/s (397kB/s)(3888KiB/10026msec) 00:37:36.681 slat (nsec): min=5539, max=48541, avg=6593.53, stdev=2117.08 00:37:36.681 clat (usec): min=40829, max=42783, avg=41240.48, stdev=441.37 00:37:36.681 lat (usec): min=40837, max=42818, avg=41247.08, stdev=441.30 00:37:36.681 clat percentiles (usec): 00:37:36.681 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:36.681 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:36.681 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:36.681 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:37:36.681 | 99.99th=[42730] 00:37:36.681 bw ( KiB/s): min= 384, max= 416, per=99.80%, avg=387.20, stdev= 9.85, samples=20 00:37:36.681 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20 00:37:36.681 lat (msec) : 50=100.00% 00:37:36.681 cpu : usr=93.47%, sys=6.33%, ctx=15, majf=0, minf=249 00:37:36.681 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:36.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.681 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:36.682 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:36.682 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:36.682 00:37:36.682 Run status group 0 (all jobs): 
00:37:36.682 READ: bw=388KiB/s (397kB/s), 388KiB/s-388KiB/s (397kB/s-397kB/s), io=3888KiB (3981kB), run=10026-10026msec 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.682 00:37:36.682 real 0m11.326s 00:37:36.682 user 0m25.673s 00:37:36.682 sys 0m0.955s 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:37:36.682 ************************************ 00:37:36.682 END TEST fio_dif_1_default 00:37:36.682 ************************************ 00:37:36.682 13:41:57 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:37:36.682 13:41:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:36.682 13:41:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:36.682 13:41:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:36.682 ************************************ 00:37:36.682 START TEST fio_dif_1_multi_subsystems 00:37:36.682 ************************************ 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:36.682 bdev_null0 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:36.682 [2024-12-05 13:41:57.963493] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:36.682 bdev_null1 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.682 13:41:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:36.682 { 00:37:36.682 "params": { 00:37:36.682 "name": "Nvme$subsystem", 00:37:36.682 "trtype": "$TEST_TRANSPORT", 00:37:36.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:36.682 "adrfam": "ipv4", 00:37:36.682 "trsvcid": "$NVMF_PORT", 00:37:36.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:36.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:36.682 "hdgst": ${hdgst:-false}, 00:37:36.682 "ddgst": ${ddgst:-false} 00:37:36.682 }, 00:37:36.682 "method": "bdev_nvme_attach_controller" 00:37:36.682 } 00:37:36.682 EOF 00:37:36.682 )") 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( 
file = 1 )) 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:36.682 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:36.682 { 00:37:36.682 "params": { 00:37:36.682 "name": "Nvme$subsystem", 00:37:36.682 "trtype": "$TEST_TRANSPORT", 00:37:36.682 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:36.682 "adrfam": "ipv4", 00:37:36.682 "trsvcid": "$NVMF_PORT", 00:37:36.682 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:36.682 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:36.682 "hdgst": ${hdgst:-false}, 00:37:36.682 "ddgst": ${ddgst:-false} 00:37:36.682 }, 00:37:36.683 "method": "bdev_nvme_attach_controller" 00:37:36.683 } 00:37:36.683 EOF 00:37:36.683 )") 00:37:36.683 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:37:36.683 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:37:36.683 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:37:36.683 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:37:36.683 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:37:36.683 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:36.683 "params": { 00:37:36.683 "name": "Nvme0", 00:37:36.683 "trtype": "tcp", 00:37:36.683 "traddr": "10.0.0.2", 00:37:36.683 "adrfam": "ipv4", 00:37:36.683 "trsvcid": "4420", 00:37:36.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:36.683 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:36.683 "hdgst": false, 00:37:36.683 "ddgst": false 00:37:36.683 }, 00:37:36.683 "method": "bdev_nvme_attach_controller" 00:37:36.683 },{ 00:37:36.683 "params": { 00:37:36.683 "name": "Nvme1", 00:37:36.683 "trtype": "tcp", 00:37:36.683 "traddr": "10.0.0.2", 00:37:36.683 "adrfam": "ipv4", 00:37:36.683 "trsvcid": "4420", 00:37:36.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:36.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:36.683 "hdgst": false, 00:37:36.683 "ddgst": false 00:37:36.683 }, 00:37:36.683 "method": "bdev_nvme_attach_controller" 00:37:36.683 }' 00:37:36.683 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:36.683 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:36.683 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:36.683 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:36.683 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:36.683 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:36.683 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:36.683 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:36.683 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:36.683 13:41:58 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:36.683 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:36.683 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:37:36.683 fio-3.35 00:37:36.683 Starting 2 threads 00:37:46.679 00:37:46.679 filename0: (groupid=0, jobs=1): err= 0: pid=1257916: Thu Dec 5 13:42:09 2024 00:37:46.679 read: IOPS=168, BW=675KiB/s (691kB/s)(6752KiB/10001msec) 00:37:46.679 slat (nsec): min=5544, max=48962, avg=5832.75, stdev=1405.07 00:37:46.679 clat (usec): min=479, max=42205, avg=23680.76, stdev=20171.90 00:37:46.679 lat (usec): min=485, max=42211, avg=23686.59, stdev=20171.85 00:37:46.679 clat percentiles (usec): 00:37:46.679 | 1.00th=[ 498], 5.00th=[ 660], 10.00th=[ 725], 20.00th=[ 742], 00:37:46.679 | 30.00th=[ 766], 40.00th=[ 889], 50.00th=[41157], 60.00th=[41157], 00:37:46.679 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:37:46.679 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:37:46.679 | 99.99th=[42206] 00:37:46.679 bw ( KiB/s): min= 576, max= 768, per=64.02%, avg=678.74, stdev=53.95, samples=19 00:37:46.679 iops : min= 144, max= 192, avg=169.68, stdev=13.49, samples=19 00:37:46.679 lat (usec) : 500=1.36%, 750=22.10%, 1000=20.14% 00:37:46.679 lat (msec) : 50=56.40% 00:37:46.679 cpu : usr=95.55%, sys=4.23%, ctx=15, majf=0, minf=168 00:37:46.679 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.679 issued rwts: total=1688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.679 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:46.679 filename1: (groupid=0, jobs=1): err= 0: pid=1257917: Thu Dec 5 13:42:09 2024 00:37:46.679 read: IOPS=95, BW=384KiB/s (393kB/s)(3840KiB/10001msec) 00:37:46.679 slat (nsec): min=5542, max=36460, avg=6616.76, stdev=1660.42 00:37:46.679 clat (usec): min=40776, max=43107, avg=41649.16, stdev=531.50 00:37:46.679 lat (usec): min=40784, max=43143, avg=41655.78, stdev=531.68 00:37:46.679 clat percentiles (usec): 00:37:46.679 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:46.679 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:37:46.679 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:37:46.679 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:37:46.679 | 99.99th=[43254] 00:37:46.679 bw ( KiB/s): min= 352, max= 416, per=36.26%, avg=384.00, stdev=15.08, samples=19 00:37:46.679 iops : min= 88, max= 104, avg=96.00, stdev= 3.77, samples=19 00:37:46.679 lat (msec) : 50=100.00% 00:37:46.679 cpu : usr=95.14%, sys=4.64%, ctx=13, majf=0, minf=98 00:37:46.679 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:37:46.679 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.679 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:46.679 00:37:46.679 Run status group 0 (all jobs): 00:37:46.679 READ: bw=1059KiB/s (1085kB/s), 384KiB/s-675KiB/s (393kB/s-691kB/s), io=10.3MiB (10.8MB), run=10001-10001msec 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.939 00:37:46.939 real 0m11.413s 00:37:46.939 user 0m34.073s 00:37:46.939 sys 0m1.247s 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:46.939 13:42:09 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:46.939 ************************************ 00:37:46.939 END TEST fio_dif_1_multi_subsystems 00:37:46.939 ************************************ 00:37:46.939 13:42:09 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:37:46.939 13:42:09 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:46.939 13:42:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:46.939 13:42:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:46.939 ************************************ 00:37:46.939 START TEST fio_dif_rand_params 00:37:46.939 ************************************ 00:37:46.939 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:37:46.939 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:46.939 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:46.939 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:46.939 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:46.939 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:46.939 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:46.939 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:46.939 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:46.939 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.940 bdev_null0 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:46.940 [2024-12-05 13:42:09.460568] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:46.940 { 00:37:46.940 "params": { 00:37:46.940 "name": "Nvme$subsystem", 00:37:46.940 "trtype": "$TEST_TRANSPORT", 00:37:46.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:46.940 "adrfam": "ipv4", 00:37:46.940 "trsvcid": "$NVMF_PORT", 00:37:46.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:46.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:46.940 "hdgst": ${hdgst:-false}, 00:37:46.940 "ddgst": ${ddgst:-false} 00:37:46.940 }, 00:37:46.940 "method": "bdev_nvme_attach_controller" 00:37:46.940 } 00:37:46.940 EOF 00:37:46.940 )") 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
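Before the three-thread run below starts, create_subsystems has issued the RPCs traced above. Condensed into a plain rpc.py sequence for reference — the arguments are verbatim from the rpc_cmd lines, while rpc.py plus the /var/tmp/spdk.sock socket (from the startup message earlier in the log) are assumed stand-ins for the harness's rpc_cmd wrapper:

#!/usr/bin/env bash
# Condensed sketch of the subsystem setup create_subsystems performs.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

# TCP transport with target-side DIF insert/strip (done once per target)
rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip

# 64 MB null bdev: 512-byte blocks carrying 16 bytes of metadata, DIF type 3
rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Subsystem + namespace + TCP listener for the fio side to attach to
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420

With --dif-insert-or-strip the transport generates and verifies the protection information at the target side, which is why the fio jobs on the initiator side need no DIF awareness of their own.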
00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:46.940 13:42:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:46.940 "params": { 00:37:46.940 "name": "Nvme0", 00:37:46.940 "trtype": "tcp", 00:37:46.940 "traddr": "10.0.0.2", 00:37:46.940 "adrfam": "ipv4", 00:37:46.940 "trsvcid": "4420", 00:37:46.940 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:46.940 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:46.940 "hdgst": false, 00:37:46.940 "ddgst": false 00:37:46.940 }, 00:37:46.940 "method": "bdev_nvme_attach_controller" 00:37:46.940 }' 00:37:47.218 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:47.218 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:47.218 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:47.218 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:47.218 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:47.218 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:47.218 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:47.218 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:47.218 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:47.218 13:42:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:47.481 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:47.481 ... 
00:37:47.481 fio-3.35 00:37:47.481 Starting 3 threads 00:37:54.090 00:37:54.090 filename0: (groupid=0, jobs=1): err= 0: pid=1260683: Thu Dec 5 13:42:15 2024 00:37:54.090 read: IOPS=231, BW=28.9MiB/s (30.3MB/s)(145MiB/5027msec) 00:37:54.090 slat (nsec): min=5564, max=34752, avg=8256.23, stdev=1893.67 00:37:54.090 clat (usec): min=7199, max=53147, avg=12953.26, stdev=7795.87 00:37:54.090 lat (usec): min=7205, max=53153, avg=12961.52, stdev=7796.02 00:37:54.090 clat percentiles (usec): 00:37:54.090 | 1.00th=[ 7635], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[ 9634], 00:37:54.090 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11469], 60.00th=[12125], 00:37:54.090 | 70.00th=[12780], 80.00th=[13435], 90.00th=[14353], 95.00th=[15533], 00:37:54.090 | 99.00th=[51643], 99.50th=[52167], 99.90th=[52167], 99.95th=[53216], 00:37:54.090 | 99.99th=[53216] 00:37:54.090 bw ( KiB/s): min=16384, max=37888, per=34.44%, avg=29696.00, stdev=6474.10, samples=10 00:37:54.090 iops : min= 128, max= 296, avg=232.00, stdev=50.58, samples=10 00:37:54.090 lat (msec) : 10=24.85%, 20=71.02%, 50=1.55%, 100=2.58% 00:37:54.090 cpu : usr=94.91%, sys=4.83%, ctx=10, majf=0, minf=95 00:37:54.090 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:54.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.090 issued rwts: total=1163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:54.090 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:54.090 filename0: (groupid=0, jobs=1): err= 0: pid=1260684: Thu Dec 5 13:42:15 2024 00:37:54.090 read: IOPS=215, BW=26.9MiB/s (28.2MB/s)(136MiB/5045msec) 00:37:54.090 slat (nsec): min=5605, max=51469, avg=8366.28, stdev=2391.37 00:37:54.090 clat (usec): min=5865, max=92005, avg=13872.52, stdev=8995.58 00:37:54.090 lat (usec): min=5873, max=92014, avg=13880.89, stdev=8995.40 00:37:54.090 clat percentiles (usec): 00:37:54.090 | 1.00th=[ 7308], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10159], 00:37:54.090 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11994], 60.00th=[12387], 00:37:54.090 | 70.00th=[13173], 80.00th=[14091], 90.00th=[15270], 95.00th=[46400], 00:37:54.090 | 99.00th=[52691], 99.50th=[53216], 99.90th=[54264], 99.95th=[91751], 00:37:54.090 | 99.99th=[91751] 00:37:54.090 bw ( KiB/s): min=21034, max=33792, per=32.22%, avg=27780.20, stdev=4537.80, samples=10 00:37:54.090 iops : min= 164, max= 264, avg=217.00, stdev=35.51, samples=10 00:37:54.090 lat (msec) : 10=17.48%, 20=77.46%, 50=1.93%, 100=3.13% 00:37:54.090 cpu : usr=94.83%, sys=4.92%, ctx=10, majf=0, minf=124 00:37:54.090 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:54.090 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.090 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.090 issued rwts: total=1087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:54.090 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:54.090 filename0: (groupid=0, jobs=1): err= 0: pid=1260685: Thu Dec 5 13:42:15 2024 00:37:54.090 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(144MiB/5045msec) 00:37:54.090 slat (nsec): min=5585, max=31962, avg=8336.19, stdev=1586.97 00:37:54.090 clat (usec): min=6189, max=57232, avg=13134.48, stdev=7362.77 00:37:54.090 lat (usec): min=6198, max=57241, avg=13142.82, stdev=7362.96 00:37:54.090 clat percentiles (usec): 00:37:54.090 | 1.00th=[ 7111], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[ 
9896], 00:37:54.090 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11731], 60.00th=[12518], 00:37:54.090 | 70.00th=[13173], 80.00th=[14091], 90.00th=[15270], 95.00th=[16450], 00:37:54.090 | 99.00th=[51643], 99.50th=[54264], 99.90th=[56361], 99.95th=[57410], 00:37:54.090 | 99.99th=[57410] 00:37:54.090 bw ( KiB/s): min=22528, max=32256, per=34.03%, avg=29337.60, stdev=3336.09, samples=10 00:37:54.090 iops : min= 176, max= 252, avg=229.20, stdev=26.06, samples=10 00:37:54.091 lat (msec) : 10=21.08%, 20=75.61%, 50=1.05%, 100=2.26% 00:37:54.091 cpu : usr=95.14%, sys=4.62%, ctx=7, majf=0, minf=86 00:37:54.091 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:54.091 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.091 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:54.091 issued rwts: total=1148,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:54.091 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:54.091 00:37:54.091 Run status group 0 (all jobs): 00:37:54.091 READ: bw=84.2MiB/s (88.3MB/s), 26.9MiB/s-28.9MiB/s (28.2MB/s-30.3MB/s), io=425MiB (445MB), run=5027-5045msec 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.091 bdev_null0 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.091 [2024-12-05 13:42:15.654659] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.091 bdev_null1 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.091 bdev_null2 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:54.091 13:42:15 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:54.091 { 00:37:54.091 "params": { 00:37:54.091 "name": "Nvme$subsystem", 00:37:54.091 "trtype": "$TEST_TRANSPORT", 00:37:54.091 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:54.091 "adrfam": "ipv4", 00:37:54.091 "trsvcid": "$NVMF_PORT", 00:37:54.091 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:54.091 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:54.091 "hdgst": ${hdgst:-false}, 00:37:54.091 "ddgst": ${ddgst:-false} 00:37:54.091 }, 00:37:54.091 "method": "bdev_nvme_attach_controller" 00:37:54.091 } 00:37:54.091 EOF 00:37:54.091 )") 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:54.091 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:54.092 { 00:37:54.092 "params": { 00:37:54.092 "name": "Nvme$subsystem", 00:37:54.092 "trtype": "$TEST_TRANSPORT", 00:37:54.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:54.092 "adrfam": "ipv4", 00:37:54.092 "trsvcid": "$NVMF_PORT", 00:37:54.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:54.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:54.092 "hdgst": ${hdgst:-false}, 00:37:54.092 "ddgst": ${ddgst:-false} 00:37:54.092 }, 00:37:54.092 "method": "bdev_nvme_attach_controller" 00:37:54.092 } 00:37:54.092 EOF 00:37:54.092 )") 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:54.092 13:42:15 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:54.092 { 00:37:54.092 "params": { 00:37:54.092 "name": "Nvme$subsystem", 00:37:54.092 "trtype": "$TEST_TRANSPORT", 00:37:54.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:54.092 "adrfam": "ipv4", 00:37:54.092 "trsvcid": "$NVMF_PORT", 00:37:54.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:54.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:54.092 "hdgst": ${hdgst:-false}, 00:37:54.092 "ddgst": ${ddgst:-false} 00:37:54.092 }, 00:37:54.092 "method": "bdev_nvme_attach_controller" 00:37:54.092 } 00:37:54.092 EOF 00:37:54.092 )") 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:54.092 "params": { 00:37:54.092 "name": "Nvme0", 00:37:54.092 "trtype": "tcp", 00:37:54.092 "traddr": "10.0.0.2", 00:37:54.092 "adrfam": "ipv4", 00:37:54.092 "trsvcid": "4420", 00:37:54.092 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:54.092 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:54.092 "hdgst": false, 00:37:54.092 "ddgst": false 00:37:54.092 }, 00:37:54.092 "method": "bdev_nvme_attach_controller" 00:37:54.092 },{ 00:37:54.092 "params": { 00:37:54.092 "name": "Nvme1", 00:37:54.092 "trtype": "tcp", 00:37:54.092 "traddr": "10.0.0.2", 00:37:54.092 "adrfam": "ipv4", 00:37:54.092 "trsvcid": "4420", 00:37:54.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:54.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:54.092 "hdgst": false, 00:37:54.092 "ddgst": false 00:37:54.092 }, 00:37:54.092 "method": "bdev_nvme_attach_controller" 00:37:54.092 },{ 00:37:54.092 "params": { 00:37:54.092 "name": "Nvme2", 00:37:54.092 "trtype": "tcp", 00:37:54.092 "traddr": "10.0.0.2", 00:37:54.092 "adrfam": "ipv4", 00:37:54.092 "trsvcid": "4420", 00:37:54.092 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:54.092 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:54.092 "hdgst": false, 00:37:54.092 "ddgst": false 00:37:54.092 }, 00:37:54.092 "method": "bdev_nvme_attach_controller" 00:37:54.092 }' 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:54.092 
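The expansion interleaved above is the body of gen_nvmf_target_json: for each subsystem id a heredoc builds one bdev_nvme_attach_controller JSON fragment into a bash array, and the fragments are then joined with IFS="," and normalized by jq before being handed to fio on /dev/fd/62. A compact, runnable rendering of that accumulation pattern (only the fragment-join step visible in the trace is reproduced; any enclosing document structure is elided here, and the values match the printf output above):

    #!/usr/bin/env bash
    # Build one attach-controller fragment per subsystem, then join with ",".
    config=()
    for subsystem in 0 1 2; do
        config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
        )")
    done
    IFS=,
    printf '%s\n' "${config[*]}"   # emits the comma-joined objects seen above

(When copying, dedent the block so the heredoc terminator EOF sits at column 0.)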
13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:54.092 13:42:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:54.092 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:54.092 ... 00:37:54.092 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:54.092 ... 00:37:54.092 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:54.092 ... 00:37:54.092 fio-3.35 00:37:54.092 Starting 24 threads 00:38:06.346 00:38:06.346 filename0: (groupid=0, jobs=1): err= 0: pid=1262171: Thu Dec 5 13:42:27 2024 00:38:06.346 read: IOPS=478, BW=1913KiB/s (1959kB/s)(18.7MiB/10005msec) 00:38:06.346 slat (nsec): min=5733, max=59979, avg=10611.02, stdev=6732.27 00:38:06.346 clat (usec): min=12037, max=41376, avg=33370.33, stdev=2216.06 00:38:06.346 lat (usec): min=12048, max=41392, avg=33380.94, stdev=2215.98 00:38:06.346 clat percentiles (usec): 00:38:06.346 | 1.00th=[20055], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:38:06.346 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.346 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:38:06.346 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:38:06.346 | 99.99th=[41157] 00:38:06.346 bw ( KiB/s): min= 1788, max= 2180, per=4.07%, avg=1912.42, stdev=80.54, samples=19 00:38:06.346 iops : min= 447, max= 545, avg=478.11, stdev=20.14, samples=19 00:38:06.346 lat (msec) : 20=0.67%, 50=99.33% 00:38:06.346 cpu : usr=98.80%, sys=0.84%, ctx=112, majf=0, minf=22 00:38:06.346 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:06.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.346 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.346 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.346 filename0: (groupid=0, jobs=1): err= 0: pid=1262172: Thu Dec 5 13:42:27 2024 00:38:06.346 read: IOPS=475, BW=1903KiB/s (1949kB/s)(18.6MiB/10022msec) 00:38:06.346 slat (nsec): min=5723, max=62478, avg=12293.29, stdev=6955.78 00:38:06.346 clat (usec): min=15620, max=35888, avg=33523.27, stdev=1438.48 00:38:06.346 lat (usec): min=15633, max=35894, avg=33535.56, stdev=1437.99 00:38:06.346 clat percentiles (usec): 00:38:06.346 | 1.00th=[26346], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:38:06.346 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.346 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:38:06.346 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:38:06.346 | 99.99th=[35914] 00:38:06.346 bw ( KiB/s): min= 1792, max= 1920, per=4.05%, avg=1899.16, stdev=47.69, samples=19 00:38:06.346 iops : min= 448, max= 480, avg=474.79, stdev=11.92, samples=19 00:38:06.346 lat (msec) : 20=0.34%, 50=99.66% 00:38:06.346 cpu : usr=98.82%, sys=0.86%, ctx=56, majf=0, minf=18 00:38:06.346 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 
32=0.0%, >=64=0.0% 00:38:06.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.346 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.346 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.346 filename0: (groupid=0, jobs=1): err= 0: pid=1262173: Thu Dec 5 13:42:27 2024 00:38:06.346 read: IOPS=483, BW=1935KiB/s (1981kB/s)(18.9MiB/10023msec) 00:38:06.346 slat (nsec): min=5687, max=68852, avg=11795.54, stdev=8012.63 00:38:06.346 clat (usec): min=2946, max=40362, avg=32978.53, stdev=4208.51 00:38:06.346 lat (usec): min=2964, max=40372, avg=32990.33, stdev=4207.58 00:38:06.346 clat percentiles (usec): 00:38:06.346 | 1.00th=[ 5211], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:38:06.346 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.346 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:38:06.346 | 99.00th=[35390], 99.50th=[35914], 99.90th=[40109], 99.95th=[40109], 00:38:06.346 | 99.99th=[40109] 00:38:06.346 bw ( KiB/s): min= 1788, max= 2688, per=4.12%, avg=1932.75, stdev=185.41, samples=20 00:38:06.346 iops : min= 447, max= 672, avg=483.15, stdev=46.35, samples=20 00:38:06.346 lat (msec) : 4=0.43%, 10=1.55%, 20=0.33%, 50=97.69% 00:38:06.346 cpu : usr=98.74%, sys=0.94%, ctx=18, majf=0, minf=23 00:38:06.346 IO depths : 1=6.1%, 2=12.3%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:06.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.346 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.346 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.346 filename0: (groupid=0, jobs=1): err= 0: pid=1262174: Thu Dec 5 13:42:27 2024 00:38:06.346 read: IOPS=475, BW=1902KiB/s (1948kB/s)(18.6MiB/10005msec) 00:38:06.346 slat (nsec): min=5598, max=74067, avg=19527.23, stdev=11911.77 00:38:06.346 clat (usec): min=11378, max=77836, avg=33468.38, stdev=3032.53 00:38:06.346 lat (usec): min=11384, max=77856, avg=33487.90, stdev=3033.06 00:38:06.346 clat percentiles (usec): 00:38:06.346 | 1.00th=[21627], 5.00th=[31589], 10.00th=[32637], 20.00th=[32637], 00:38:06.346 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.346 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:38:06.346 | 99.00th=[43254], 99.50th=[51119], 99.90th=[61080], 99.95th=[61080], 00:38:06.346 | 99.99th=[78119] 00:38:06.346 bw ( KiB/s): min= 1788, max= 2016, per=4.03%, avg=1891.58, stdev=64.14, samples=19 00:38:06.346 iops : min= 447, max= 504, avg=472.89, stdev=16.03, samples=19 00:38:06.346 lat (msec) : 20=0.46%, 50=99.03%, 100=0.50% 00:38:06.346 cpu : usr=98.29%, sys=1.12%, ctx=225, majf=0, minf=16 00:38:06.346 IO depths : 1=5.5%, 2=11.1%, 4=22.9%, 8=53.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:38:06.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.346 complete : 0=0.0%, 4=93.6%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.346 issued rwts: total=4758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.346 filename0: (groupid=0, jobs=1): err= 0: pid=1262175: Thu Dec 5 13:42:27 2024 00:38:06.346 read: IOPS=463, BW=1855KiB/s (1899kB/s)(18.1MiB/10020msec) 00:38:06.346 slat (nsec): min=5701, max=94176, avg=18280.40, stdev=12705.93 
00:38:06.346 clat (usec): min=19223, max=61137, avg=34350.03, stdev=4542.08 00:38:06.346 lat (usec): min=19230, max=61198, avg=34368.31, stdev=4543.48 00:38:06.346 clat percentiles (usec): 00:38:06.346 | 1.00th=[21365], 5.00th=[26870], 10.00th=[32637], 20.00th=[32900], 00:38:06.346 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.346 | 70.00th=[34341], 80.00th=[34341], 90.00th=[41157], 95.00th=[44827], 00:38:06.346 | 99.00th=[47973], 99.50th=[52167], 99.90th=[55313], 99.95th=[55313], 00:38:06.346 | 99.99th=[61080] 00:38:06.346 bw ( KiB/s): min= 1647, max= 2112, per=3.94%, avg=1850.70, stdev=116.50, samples=20 00:38:06.346 iops : min= 411, max= 528, avg=462.60, stdev=29.14, samples=20 00:38:06.346 lat (msec) : 20=0.41%, 50=98.64%, 100=0.95% 00:38:06.346 cpu : usr=98.78%, sys=0.86%, ctx=60, majf=0, minf=18 00:38:06.346 IO depths : 1=5.4%, 2=10.9%, 4=22.5%, 8=53.9%, 16=7.3%, 32=0.0%, >=64=0.0% 00:38:06.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.346 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.346 issued rwts: total=4646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.346 filename0: (groupid=0, jobs=1): err= 0: pid=1262176: Thu Dec 5 13:42:27 2024 00:38:06.346 read: IOPS=476, BW=1908KiB/s (1953kB/s)(18.6MiB/10006msec) 00:38:06.346 slat (nsec): min=5702, max=73672, avg=18107.34, stdev=12973.86 00:38:06.346 clat (usec): min=16779, max=53934, avg=33387.71, stdev=2712.67 00:38:06.346 lat (usec): min=16785, max=53961, avg=33405.81, stdev=2712.93 00:38:06.346 clat percentiles (usec): 00:38:06.346 | 1.00th=[21627], 5.00th=[31327], 10.00th=[32375], 20.00th=[32637], 00:38:06.346 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.346 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:38:06.346 | 99.00th=[42206], 99.50th=[50594], 99.90th=[53740], 99.95th=[53740], 00:38:06.346 | 99.99th=[53740] 00:38:06.346 bw ( KiB/s): min= 1788, max= 2011, per=4.05%, avg=1901.42, stdev=55.30, samples=19 00:38:06.346 iops : min= 447, max= 502, avg=475.16, stdev=13.87, samples=19 00:38:06.346 lat (msec) : 20=0.63%, 50=98.87%, 100=0.50% 00:38:06.346 cpu : usr=98.85%, sys=0.71%, ctx=27, majf=0, minf=24 00:38:06.346 IO depths : 1=5.7%, 2=11.5%, 4=23.7%, 8=52.2%, 16=6.9%, 32=0.0%, >=64=0.0% 00:38:06.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.346 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.346 issued rwts: total=4772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.346 filename0: (groupid=0, jobs=1): err= 0: pid=1262177: Thu Dec 5 13:42:27 2024 00:38:06.346 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10007msec) 00:38:06.346 slat (nsec): min=5712, max=75482, avg=11786.43, stdev=9492.06 00:38:06.346 clat (usec): min=7804, max=53998, avg=33615.11, stdev=2757.62 00:38:06.346 lat (usec): min=7809, max=54017, avg=33626.90, stdev=2757.39 00:38:06.346 clat percentiles (usec): 00:38:06.346 | 1.00th=[21890], 5.00th=[32113], 10.00th=[32637], 20.00th=[32900], 00:38:06.346 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.346 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:38:06.346 | 99.00th=[43779], 99.50th=[45876], 99.90th=[53740], 99.95th=[53740], 00:38:06.346 | 99.99th=[53740] 00:38:06.346 bw ( KiB/s): min= 
1788, max= 1923, per=4.03%, avg=1892.21, stdev=51.64, samples=19 00:38:06.346 iops : min= 447, max= 480, avg=472.89, stdev=13.02, samples=19 00:38:06.346 lat (msec) : 10=0.29%, 20=0.04%, 50=99.33%, 100=0.34% 00:38:06.346 cpu : usr=99.17%, sys=0.52%, ctx=33, majf=0, minf=24 00:38:06.346 IO depths : 1=4.9%, 2=11.2%, 4=24.9%, 8=51.4%, 16=7.6%, 32=0.0%, >=64=0.0% 00:38:06.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.346 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.347 issued rwts: total=4750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.347 filename0: (groupid=0, jobs=1): err= 0: pid=1262178: Thu Dec 5 13:42:27 2024 00:38:06.347 read: IOPS=477, BW=1909KiB/s (1954kB/s)(18.7MiB/10026msec) 00:38:06.347 slat (nsec): min=5751, max=93271, avg=19272.80, stdev=13285.30 00:38:06.347 clat (usec): min=12101, max=36650, avg=33361.07, stdev=2020.78 00:38:06.347 lat (usec): min=12112, max=36663, avg=33380.34, stdev=2021.15 00:38:06.347 clat percentiles (usec): 00:38:06.347 | 1.00th=[20841], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:38:06.347 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.347 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:38:06.347 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:38:06.347 | 99.99th=[36439] 00:38:06.347 bw ( KiB/s): min= 1788, max= 2048, per=4.06%, avg=1905.47, stdev=58.99, samples=19 00:38:06.347 iops : min= 447, max= 512, avg=476.37, stdev=14.75, samples=19 00:38:06.347 lat (msec) : 20=0.67%, 50=99.33% 00:38:06.347 cpu : usr=98.95%, sys=0.69%, ctx=34, majf=0, minf=28 00:38:06.347 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:06.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.347 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.347 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.347 filename1: (groupid=0, jobs=1): err= 0: pid=1262179: Thu Dec 5 13:42:27 2024 00:38:06.347 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10002msec) 00:38:06.347 slat (nsec): min=5723, max=89843, avg=21997.95, stdev=13692.56 00:38:06.347 clat (usec): min=19378, max=60964, avg=33578.77, stdev=1953.50 00:38:06.347 lat (usec): min=19384, max=60986, avg=33600.77, stdev=1953.23 00:38:06.347 clat percentiles (usec): 00:38:06.347 | 1.00th=[31327], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:38:06.347 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.347 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:38:06.347 | 99.00th=[35390], 99.50th=[35390], 99.90th=[61080], 99.95th=[61080], 00:38:06.347 | 99.99th=[61080] 00:38:06.347 bw ( KiB/s): min= 1792, max= 1920, per=4.02%, avg=1885.84, stdev=57.27, samples=19 00:38:06.347 iops : min= 448, max= 480, avg=471.42, stdev=14.38, samples=19 00:38:06.347 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:38:06.347 cpu : usr=99.10%, sys=0.60%, ctx=13, majf=0, minf=25 00:38:06.347 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:06.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.347 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.347 issued rwts: total=4736,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:38:06.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.347 filename1: (groupid=0, jobs=1): err= 0: pid=1262180: Thu Dec 5 13:42:27 2024 00:38:06.347 read: IOPS=571, BW=2286KiB/s (2341kB/s)(22.4MiB/10019msec) 00:38:06.347 slat (nsec): min=5708, max=52623, avg=8092.64, stdev=3214.69 00:38:06.347 clat (usec): min=9449, max=38587, avg=27924.51, stdev=5770.21 00:38:06.347 lat (usec): min=9457, max=38594, avg=27932.61, stdev=5771.05 00:38:06.347 clat percentiles (usec): 00:38:06.347 | 1.00th=[16057], 5.00th=[20317], 10.00th=[20841], 20.00th=[21890], 00:38:06.347 | 30.00th=[22938], 40.00th=[24511], 50.00th=[27919], 60.00th=[32900], 00:38:06.347 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:38:06.347 | 99.00th=[34866], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:38:06.347 | 99.99th=[38536] 00:38:06.347 bw ( KiB/s): min= 1792, max= 2816, per=4.87%, avg=2288.21, stdev=312.10, samples=19 00:38:06.347 iops : min= 448, max= 704, avg=571.89, stdev=77.92, samples=19 00:38:06.347 lat (msec) : 10=0.16%, 20=3.84%, 50=96.00% 00:38:06.347 cpu : usr=99.06%, sys=0.64%, ctx=10, majf=0, minf=25 00:38:06.347 IO depths : 1=6.0%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:38:06.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.347 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.347 issued rwts: total=5726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.347 filename1: (groupid=0, jobs=1): err= 0: pid=1262181: Thu Dec 5 13:42:27 2024 00:38:06.347 read: IOPS=480, BW=1923KiB/s (1969kB/s)(18.8MiB/10016msec) 00:38:06.347 slat (nsec): min=5464, max=77068, avg=19520.74, stdev=13714.62 00:38:06.347 clat (usec): min=19123, max=58815, avg=33104.56, stdev=3183.35 00:38:06.347 lat (usec): min=19131, max=58831, avg=33124.08, stdev=3184.17 00:38:06.347 clat percentiles (usec): 00:38:06.347 | 1.00th=[21103], 5.00th=[25560], 10.00th=[32375], 20.00th=[32637], 00:38:06.347 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33817], 60.00th=[33817], 00:38:06.347 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:38:06.347 | 99.00th=[44827], 99.50th=[49021], 99.90th=[50070], 99.95th=[50070], 00:38:06.347 | 99.99th=[58983] 00:38:06.347 bw ( KiB/s): min= 1788, max= 2144, per=4.09%, avg=1919.55, stdev=79.94, samples=20 00:38:06.347 iops : min= 447, max= 536, avg=479.85, stdev=19.98, samples=20 00:38:06.347 lat (msec) : 20=0.54%, 50=99.42%, 100=0.04% 00:38:06.347 cpu : usr=98.87%, sys=0.82%, ctx=48, majf=0, minf=21 00:38:06.347 IO depths : 1=5.4%, 2=11.0%, 4=23.2%, 8=53.2%, 16=7.2%, 32=0.0%, >=64=0.0% 00:38:06.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.347 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.347 issued rwts: total=4816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.347 filename1: (groupid=0, jobs=1): err= 0: pid=1262182: Thu Dec 5 13:42:27 2024 00:38:06.347 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10012msec) 00:38:06.347 slat (nsec): min=5690, max=72282, avg=16735.42, stdev=12332.07 00:38:06.347 clat (usec): min=15905, max=53016, avg=33664.10, stdev=3335.27 00:38:06.347 lat (usec): min=15911, max=53023, avg=33680.83, stdev=3335.78 00:38:06.347 clat percentiles (usec): 00:38:06.347 | 1.00th=[21890], 5.00th=[28443], 10.00th=[32637], 
20.00th=[32900], 00:38:06.347 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.347 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[39060], 00:38:06.347 | 99.00th=[46924], 99.50th=[49021], 99.90th=[53216], 99.95th=[53216], 00:38:06.347 | 99.99th=[53216] 00:38:06.347 bw ( KiB/s): min= 1792, max= 2080, per=4.03%, avg=1891.25, stdev=71.17, samples=20 00:38:06.347 iops : min= 448, max= 520, avg=472.70, stdev=17.79, samples=20 00:38:06.347 lat (msec) : 20=0.38%, 50=99.22%, 100=0.40% 00:38:06.347 cpu : usr=99.11%, sys=0.59%, ctx=12, majf=0, minf=22 00:38:06.347 IO depths : 1=4.4%, 2=8.8%, 4=18.8%, 8=58.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:38:06.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.347 complete : 0=0.0%, 4=92.6%, 8=2.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.347 issued rwts: total=4737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.347 filename1: (groupid=0, jobs=1): err= 0: pid=1262183: Thu Dec 5 13:42:27 2024 00:38:06.347 read: IOPS=477, BW=1910KiB/s (1956kB/s)(18.7MiB/10018msec) 00:38:06.347 slat (nsec): min=5706, max=67861, avg=10414.21, stdev=7597.87 00:38:06.347 clat (usec): min=15368, max=43508, avg=33415.60, stdev=2111.03 00:38:06.347 lat (usec): min=15380, max=43517, avg=33426.01, stdev=2110.67 00:38:06.347 clat percentiles (usec): 00:38:06.347 | 1.00th=[21103], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:38:06.347 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.347 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:38:06.347 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:38:06.347 | 99.99th=[43254] 00:38:06.347 bw ( KiB/s): min= 1788, max= 2052, per=4.06%, avg=1905.68, stdev=59.53, samples=19 00:38:06.347 iops : min= 447, max= 513, avg=476.42, stdev=14.88, samples=19 00:38:06.347 lat (msec) : 20=0.67%, 50=99.33% 00:38:06.347 cpu : usr=98.54%, sys=0.97%, ctx=129, majf=0, minf=19 00:38:06.347 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:06.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.347 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.347 issued rwts: total=4784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.347 filename1: (groupid=0, jobs=1): err= 0: pid=1262184: Thu Dec 5 13:42:27 2024 00:38:06.347 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10011msec) 00:38:06.347 slat (nsec): min=5174, max=62487, avg=15552.18, stdev=9688.00 00:38:06.347 clat (usec): min=16657, max=47052, avg=33560.71, stdev=1867.53 00:38:06.347 lat (usec): min=16663, max=47069, avg=33576.26, stdev=1867.63 00:38:06.347 clat percentiles (usec): 00:38:06.347 | 1.00th=[24773], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:38:06.347 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.347 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:38:06.347 | 99.00th=[35390], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:38:06.347 | 99.99th=[46924] 00:38:06.347 bw ( KiB/s): min= 1792, max= 1923, per=4.04%, avg=1895.00, stdev=52.47, samples=20 00:38:06.347 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:38:06.347 lat (msec) : 20=0.38%, 50=99.62% 00:38:06.347 cpu : usr=98.74%, sys=0.76%, ctx=167, majf=0, minf=28 
00:38:06.347 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:06.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.347 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.347 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.347 filename1: (groupid=0, jobs=1): err= 0: pid=1262185: Thu Dec 5 13:42:27 2024 00:38:06.347 read: IOPS=473, BW=1895KiB/s (1941kB/s)(18.5MiB/10007msec) 00:38:06.347 slat (nsec): min=5423, max=75610, avg=20565.55, stdev=14081.42 00:38:06.347 clat (usec): min=17234, max=54864, avg=33563.30, stdev=1938.66 00:38:06.347 lat (usec): min=17248, max=54879, avg=33583.86, stdev=1938.10 00:38:06.347 clat percentiles (usec): 00:38:06.347 | 1.00th=[27132], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:38:06.347 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.347 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:38:06.347 | 99.00th=[35390], 99.50th=[46924], 99.90th=[54789], 99.95th=[54789], 00:38:06.347 | 99.99th=[54789] 00:38:06.348 bw ( KiB/s): min= 1788, max= 1923, per=4.02%, avg=1888.68, stdev=53.67, samples=19 00:38:06.348 iops : min= 447, max= 480, avg=472.05, stdev=13.44, samples=19 00:38:06.348 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:38:06.348 cpu : usr=98.46%, sys=1.00%, ctx=103, majf=0, minf=25 00:38:06.348 IO depths : 1=5.8%, 2=11.9%, 4=24.4%, 8=51.1%, 16=6.8%, 32=0.0%, >=64=0.0% 00:38:06.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.348 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.348 issued rwts: total=4742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.348 filename1: (groupid=0, jobs=1): err= 0: pid=1262186: Thu Dec 5 13:42:27 2024 00:38:06.348 read: IOPS=474, BW=1898KiB/s (1944kB/s)(18.6MiB/10015msec) 00:38:06.348 slat (nsec): min=5733, max=78870, avg=9364.26, stdev=6481.68 00:38:06.348 clat (usec): min=19653, max=45536, avg=33638.02, stdev=1371.90 00:38:06.348 lat (usec): min=19673, max=45557, avg=33647.39, stdev=1371.34 00:38:06.348 clat percentiles (usec): 00:38:06.348 | 1.00th=[31065], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:38:06.348 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.348 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:38:06.348 | 99.00th=[35390], 99.50th=[35914], 99.90th=[45351], 99.95th=[45351], 00:38:06.348 | 99.99th=[45351] 00:38:06.348 bw ( KiB/s): min= 1788, max= 1923, per=4.03%, avg=1892.58, stdev=54.44, samples=19 00:38:06.348 iops : min= 447, max= 480, avg=473.11, stdev=13.59, samples=19 00:38:06.348 lat (msec) : 20=0.29%, 50=99.71% 00:38:06.348 cpu : usr=99.12%, sys=0.53%, ctx=69, majf=0, minf=32 00:38:06.348 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:06.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.348 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.348 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.348 filename2: (groupid=0, jobs=1): err= 0: pid=1262187: Thu Dec 5 13:42:27 2024 00:38:06.348 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10003msec) 00:38:06.348 slat (nsec): 
min=4832, max=93838, avg=20886.48, stdev=13784.60 00:38:06.348 clat (usec): min=19499, max=61182, avg=33584.21, stdev=1962.75 00:38:06.348 lat (usec): min=19505, max=61199, avg=33605.10, stdev=1962.31 00:38:06.348 clat percentiles (usec): 00:38:06.348 | 1.00th=[31327], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:38:06.348 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.348 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:38:06.348 | 99.00th=[35390], 99.50th=[35390], 99.90th=[61080], 99.95th=[61080], 00:38:06.348 | 99.99th=[61080] 00:38:06.348 bw ( KiB/s): min= 1792, max= 1920, per=4.02%, avg=1885.68, stdev=57.54, samples=19 00:38:06.348 iops : min= 448, max= 480, avg=471.42, stdev=14.38, samples=19 00:38:06.348 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:38:06.348 cpu : usr=98.76%, sys=0.78%, ctx=50, majf=0, minf=28 00:38:06.348 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:06.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.348 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.348 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.348 filename2: (groupid=0, jobs=1): err= 0: pid=1262188: Thu Dec 5 13:42:27 2024 00:38:06.348 read: IOPS=473, BW=1893KiB/s (1939kB/s)(18.5MiB/10006msec) 00:38:06.348 slat (nsec): min=5590, max=76727, avg=18348.65, stdev=14614.50 00:38:06.348 clat (usec): min=16878, max=54007, avg=33644.78, stdev=2016.91 00:38:06.348 lat (usec): min=16884, max=54023, avg=33663.13, stdev=2015.63 00:38:06.348 clat percentiles (usec): 00:38:06.348 | 1.00th=[31327], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:38:06.348 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.348 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:38:06.348 | 99.00th=[40109], 99.50th=[45351], 99.90th=[53740], 99.95th=[53740], 00:38:06.348 | 99.99th=[54264] 00:38:06.348 bw ( KiB/s): min= 1788, max= 1923, per=4.02%, avg=1886.32, stdev=58.19, samples=19 00:38:06.348 iops : min= 447, max= 480, avg=471.42, stdev=14.63, samples=19 00:38:06.348 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:38:06.348 cpu : usr=98.96%, sys=0.72%, ctx=52, majf=0, minf=27 00:38:06.348 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:06.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.348 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.348 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.348 filename2: (groupid=0, jobs=1): err= 0: pid=1262189: Thu Dec 5 13:42:27 2024 00:38:06.348 read: IOPS=478, BW=1914KiB/s (1960kB/s)(18.8MiB/10030msec) 00:38:06.348 slat (nsec): min=5710, max=88168, avg=9752.37, stdev=7452.80 00:38:06.348 clat (usec): min=10782, max=35732, avg=33343.98, stdev=2341.46 00:38:06.348 lat (usec): min=10792, max=35741, avg=33353.73, stdev=2341.09 00:38:06.348 clat percentiles (usec): 00:38:06.348 | 1.00th=[19530], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:38:06.348 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.348 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:38:06.348 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:38:06.348 
| 99.99th=[35914] 00:38:06.348 bw ( KiB/s): min= 1792, max= 2052, per=4.07%, avg=1912.42, stdev=52.27, samples=19 00:38:06.348 iops : min= 448, max= 513, avg=478.11, stdev=13.07, samples=19 00:38:06.348 lat (msec) : 20=1.29%, 50=98.71% 00:38:06.348 cpu : usr=98.56%, sys=1.05%, ctx=124, majf=0, minf=19 00:38:06.348 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:38:06.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.348 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.348 issued rwts: total=4800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.348 filename2: (groupid=0, jobs=1): err= 0: pid=1262190: Thu Dec 5 13:42:27 2024 00:38:06.348 read: IOPS=487, BW=1952KiB/s (1998kB/s)(19.1MiB/10031msec) 00:38:06.348 slat (nsec): min=5705, max=65899, avg=12831.23, stdev=9801.77 00:38:06.348 clat (usec): min=11459, max=59149, avg=32679.55, stdev=4838.20 00:38:06.348 lat (usec): min=11480, max=59157, avg=32692.38, stdev=4839.23 00:38:06.348 clat percentiles (usec): 00:38:06.348 | 1.00th=[20841], 5.00th=[23200], 10.00th=[25560], 20.00th=[30016], 00:38:06.348 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33424], 60.00th=[33817], 00:38:06.348 | 70.00th=[33817], 80.00th=[34341], 90.00th=[35390], 95.00th=[41157], 00:38:06.348 | 99.00th=[46400], 99.50th=[51119], 99.90th=[54264], 99.95th=[58983], 00:38:06.348 | 99.99th=[58983] 00:38:06.348 bw ( KiB/s): min= 1792, max= 2128, per=4.16%, avg=1951.11, stdev=88.87, samples=19 00:38:06.348 iops : min= 448, max= 532, avg=487.74, stdev=22.17, samples=19 00:38:06.348 lat (msec) : 20=0.98%, 50=98.32%, 100=0.69% 00:38:06.348 cpu : usr=98.59%, sys=0.97%, ctx=84, majf=0, minf=28 00:38:06.348 IO depths : 1=3.1%, 2=6.2%, 4=14.6%, 8=65.8%, 16=10.3%, 32=0.0%, >=64=0.0% 00:38:06.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.348 complete : 0=0.0%, 4=91.3%, 8=3.9%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.348 issued rwts: total=4894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.348 filename2: (groupid=0, jobs=1): err= 0: pid=1262191: Thu Dec 5 13:42:27 2024 00:38:06.348 read: IOPS=473, BW=1893KiB/s (1939kB/s)(18.5MiB/10005msec) 00:38:06.348 slat (nsec): min=5710, max=76775, avg=19100.70, stdev=12063.84 00:38:06.348 clat (usec): min=19704, max=61489, avg=33622.74, stdev=1966.31 00:38:06.348 lat (usec): min=19723, max=61507, avg=33641.84, stdev=1965.95 00:38:06.348 clat percentiles (usec): 00:38:06.348 | 1.00th=[31327], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:38:06.348 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.348 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:38:06.348 | 99.00th=[35390], 99.50th=[35390], 99.90th=[61604], 99.95th=[61604], 00:38:06.348 | 99.99th=[61604] 00:38:06.348 bw ( KiB/s): min= 1792, max= 1920, per=4.02%, avg=1885.68, stdev=57.54, samples=19 00:38:06.348 iops : min= 448, max= 480, avg=471.42, stdev=14.38, samples=19 00:38:06.348 lat (msec) : 20=0.30%, 50=99.37%, 100=0.34% 00:38:06.348 cpu : usr=98.88%, sys=0.83%, ctx=11, majf=0, minf=25 00:38:06.348 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:06.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.348 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.348 issued 
rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.348 filename2: (groupid=0, jobs=1): err= 0: pid=1262192: Thu Dec 5 13:42:27 2024 00:38:06.348 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.6MiB/10018msec) 00:38:06.348 slat (nsec): min=5725, max=73304, avg=15334.19, stdev=10934.47 00:38:06.348 clat (usec): min=19542, max=43571, avg=33595.07, stdev=1270.31 00:38:06.348 lat (usec): min=19548, max=43587, avg=33610.41, stdev=1270.30 00:38:06.348 clat percentiles (usec): 00:38:06.348 | 1.00th=[31589], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:38:06.348 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.348 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:38:06.348 | 99.00th=[35390], 99.50th=[35914], 99.90th=[43779], 99.95th=[43779], 00:38:06.348 | 99.99th=[43779] 00:38:06.348 bw ( KiB/s): min= 1792, max= 1920, per=4.03%, avg=1893.60, stdev=52.16, samples=20 00:38:06.348 iops : min= 448, max= 480, avg=473.40, stdev=13.04, samples=20 00:38:06.348 lat (msec) : 20=0.34%, 50=99.66% 00:38:06.348 cpu : usr=98.87%, sys=0.83%, ctx=35, majf=0, minf=20 00:38:06.348 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:38:06.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.348 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.348 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.348 filename2: (groupid=0, jobs=1): err= 0: pid=1262193: Thu Dec 5 13:42:27 2024 00:38:06.348 read: IOPS=479, BW=1920KiB/s (1966kB/s)(18.8MiB/10015msec) 00:38:06.348 slat (nsec): min=5725, max=57888, avg=14157.03, stdev=9071.89 00:38:06.349 clat (usec): min=5686, max=35637, avg=33218.31, stdev=3026.15 00:38:06.349 lat (usec): min=5706, max=35645, avg=33232.47, stdev=3025.78 00:38:06.349 clat percentiles (usec): 00:38:06.349 | 1.00th=[15401], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:38:06.349 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:38:06.349 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:38:06.349 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:38:06.349 | 99.99th=[35390] 00:38:06.349 bw ( KiB/s): min= 1792, max= 2352, per=4.08%, avg=1915.20, stdev=115.11, samples=20 00:38:06.349 iops : min= 448, max= 588, avg=478.80, stdev=28.78, samples=20 00:38:06.349 lat (msec) : 10=0.79%, 20=1.06%, 50=98.15% 00:38:06.349 cpu : usr=99.00%, sys=0.69%, ctx=32, majf=0, minf=22 00:38:06.349 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:38:06.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.349 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.349 issued rwts: total=4806,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.349 filename2: (groupid=0, jobs=1): err= 0: pid=1262194: Thu Dec 5 13:42:27 2024 00:38:06.349 read: IOPS=708, BW=2835KiB/s (2903kB/s)(27.8MiB/10025msec) 00:38:06.349 slat (nsec): min=3005, max=27422, avg=6522.00, stdev=1184.39 00:38:06.349 clat (usec): min=1566, max=34782, avg=22497.74, stdev=4359.06 00:38:06.349 lat (usec): min=1572, max=34788, avg=22504.26, stdev=4359.18 00:38:06.349 clat percentiles (usec): 00:38:06.349 | 1.00th=[ 2835], 5.00th=[17695], 
10.00th=[19268], 20.00th=[20579], 00:38:06.349 | 30.00th=[21365], 40.00th=[21890], 50.00th=[22938], 60.00th=[23200], 00:38:06.349 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25560], 95.00th=[28967], 00:38:06.349 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34866], 99.95th=[34866], 00:38:06.349 | 99.99th=[34866] 00:38:06.349 bw ( KiB/s): min= 2682, max= 3840, per=6.05%, avg=2839.10, stdev=244.27, samples=20 00:38:06.349 iops : min= 670, max= 960, avg=709.70, stdev=61.10, samples=20 00:38:06.349 lat (msec) : 2=0.25%, 4=1.29%, 10=0.70%, 20=15.23%, 50=82.52% 00:38:06.349 cpu : usr=98.95%, sys=0.78%, ctx=17, majf=0, minf=33 00:38:06.349 IO depths : 1=3.6%, 2=7.3%, 4=17.0%, 8=63.0%, 16=9.0%, 32=0.0%, >=64=0.0% 00:38:06.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.349 complete : 0=0.0%, 4=91.8%, 8=2.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:06.349 issued rwts: total=7106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:06.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:38:06.349 00:38:06.349 Run status group 0 (all jobs): 00:38:06.349 READ: bw=45.8MiB/s (48.1MB/s), 1855KiB/s-2835KiB/s (1899kB/s-2903kB/s), io=460MiB (482MB), run=10002-10031msec 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
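The run-status line closes out the 24-job randread pass: per-job bandwidth spanned 1855KiB/s to 2835KiB/s, with 460MiB read in roughly 10 seconds for an aggregate 45.8MiB/s. The per-job IOPS figures are consistent with the 4KiB block size (for example, 1913KiB/s divided by 4KiB is about 478 IOPS). The teardown now in progress undoes the setup in reverse for each subsystem id, deleting the NVMe-oF subsystem before its backing null bdev, as in this sketch (same scripts/rpc.py assumption as before):

    # destroy_subsystems 0 1 2: delete the subsystem first, then its bdev.
    for sub_id in 0 1 2; do
        rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}"
        rpc.py bdev_null_delete "bdev_null${sub_id}"
    done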
00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.349 bdev_null0 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.349 13:42:27 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.349 [2024-12-05 13:42:27.497576] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.349 bdev_null1 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:06.349 13:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:06.350 { 00:38:06.350 "params": { 00:38:06.350 "name": "Nvme$subsystem", 00:38:06.350 "trtype": "$TEST_TRANSPORT", 00:38:06.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:06.350 "adrfam": "ipv4", 00:38:06.350 "trsvcid": "$NVMF_PORT", 00:38:06.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:06.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:06.350 "hdgst": ${hdgst:-false}, 00:38:06.350 "ddgst": ${ddgst:-false} 00:38:06.350 }, 00:38:06.350 "method": "bdev_nvme_attach_controller" 00:38:06.350 } 00:38:06.350 EOF 00:38:06.350 )") 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:06.350 { 00:38:06.350 "params": { 00:38:06.350 "name": "Nvme$subsystem", 00:38:06.350 "trtype": "$TEST_TRANSPORT", 00:38:06.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:06.350 "adrfam": "ipv4", 00:38:06.350 "trsvcid": "$NVMF_PORT", 00:38:06.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:06.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:06.350 "hdgst": ${hdgst:-false}, 00:38:06.350 "ddgst": ${ddgst:-false} 00:38:06.350 }, 00:38:06.350 "method": "bdev_nvme_attach_controller" 00:38:06.350 } 00:38:06.350 EOF 00:38:06.350 )") 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:06.350 "params": { 00:38:06.350 "name": "Nvme0", 00:38:06.350 "trtype": "tcp", 00:38:06.350 "traddr": "10.0.0.2", 00:38:06.350 "adrfam": "ipv4", 00:38:06.350 "trsvcid": "4420", 00:38:06.350 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:06.350 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:06.350 "hdgst": false, 00:38:06.350 "ddgst": false 00:38:06.350 }, 00:38:06.350 "method": "bdev_nvme_attach_controller" 00:38:06.350 },{ 00:38:06.350 "params": { 00:38:06.350 "name": "Nvme1", 00:38:06.350 "trtype": "tcp", 00:38:06.350 "traddr": "10.0.0.2", 00:38:06.350 "adrfam": "ipv4", 00:38:06.350 "trsvcid": "4420", 00:38:06.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:06.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:06.350 "hdgst": false, 00:38:06.350 "ddgst": false 00:38:06.350 }, 00:38:06.350 "method": "bdev_nvme_attach_controller" 00:38:06.350 }' 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:38:06.350 13:42:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:38:06.350 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:06.350 ... 00:38:06.350 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:38:06.350 ... 
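The echoed job descriptions match the NULL_DIF=1 parameters set after the teardown: randread with split per-direction block sizes (8KiB reads, 16KiB writes, 128KiB trims, i.e. bs=8k,16k,128k), iodepth=8, and two filename sections; with numjobs=2 this accounts for the "Starting 4 threads" line that follows. The job file itself (written by gen_fio_conf and fed over /dev/fd/61) is never echoed in the log, so the following is only a hedged sketch of a job file that would produce these exact echoes; the section layout and the filename= bdev names are illustrative, not the script's literal output:

    # Approximation of the fio job file passed over /dev/fd/61 (assumed
    # bdev names Nvme0n1/Nvme1n1; dedent so EOF lands at column 0).
    cat > /tmp/dif.fio <<'EOF'
    [global]
    thread=1
    ioengine=spdk_bdev
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    runtime=5
    time_based=1

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1
    EOF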
00:38:06.350 fio-3.35
00:38:06.350 Starting 4 threads
00:38:11.652
00:38:11.652 filename0: (groupid=0, jobs=1): err= 0: pid=1264555: Thu Dec 5 13:42:33 2024
00:38:11.652 read: IOPS=2032, BW=15.9MiB/s (16.6MB/s)(79.4MiB/5002msec)
00:38:11.652 slat (nsec): min=5531, max=81991, avg=8486.67, stdev=3535.90
00:38:11.652 clat (usec): min=2084, max=9455, avg=3915.86, stdev=508.16
00:38:11.653 lat (usec): min=2089, max=9480, avg=3924.35, stdev=508.06
00:38:11.653 clat percentiles (usec):
00:38:11.653 | 1.00th=[ 2900], 5.00th=[ 3392], 10.00th=[ 3523], 20.00th=[ 3621],
00:38:11.653 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3851],
00:38:11.653 | 70.00th=[ 3916], 80.00th=[ 4146], 90.00th=[ 4359], 95.00th=[ 4883],
00:38:11.653 | 99.00th=[ 5866], 99.50th=[ 6063], 99.90th=[ 6915], 99.95th=[ 9372],
00:38:11.653 | 99.99th=[ 9503]
00:38:11.653 bw ( KiB/s): min=15792, max=16560, per=24.55%, avg=16257.60, stdev=248.15, samples=10
00:38:11.653 iops : min= 1974, max= 2070, avg=2032.20, stdev=31.02, samples=10
00:38:11.653 lat (msec) : 4=75.84%, 10=24.16%
00:38:11.653 cpu : usr=97.32%, sys=2.42%, ctx=7, majf=0, minf=122
00:38:11.653 IO depths : 1=0.1%, 2=0.1%, 4=67.6%, 8=32.3%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:11.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:11.653 complete : 0=0.0%, 4=96.3%, 8=3.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:11.653 issued rwts: total=10166,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:11.653 latency : target=0, window=0, percentile=100.00%, depth=8
00:38:11.653 filename0: (groupid=0, jobs=1): err= 0: pid=1264557: Thu Dec 5 13:42:33 2024
00:38:11.653 read: IOPS=2229, BW=17.4MiB/s (18.3MB/s)(87.1MiB/5001msec)
00:38:11.653 slat (nsec): min=5533, max=80979, avg=8596.17, stdev=3016.14
00:38:11.653 clat (usec): min=1224, max=6614, avg=3564.30, stdev=540.42
00:38:11.653 lat (usec): min=1230, max=6622, avg=3572.90, stdev=540.59
00:38:11.653 clat percentiles (usec):
00:38:11.653 | 1.00th=[ 2442], 5.00th=[ 2802], 10.00th=[ 2900], 20.00th=[ 3130],
00:38:11.653 | 30.00th=[ 3261], 40.00th=[ 3425], 50.00th=[ 3589], 60.00th=[ 3752],
00:38:11.653 | 70.00th=[ 3818], 80.00th=[ 3851], 90.00th=[ 4113], 95.00th=[ 4752],
00:38:11.653 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 5735], 99.95th=[ 6063],
00:38:11.653 | 99.99th=[ 6587]
00:38:11.653 bw ( KiB/s): min=16800, max=18512, per=26.93%, avg=17832.30, stdev=548.90, samples=10
00:38:11.653 iops : min= 2100, max= 2314, avg=2229.00, stdev=68.61, samples=10
00:38:11.653 lat (msec) : 2=0.19%, 4=88.93%, 10=10.88%
00:38:11.653 cpu : usr=97.06%, sys=2.66%, ctx=7, majf=0, minf=78
00:38:11.653 IO depths : 1=0.1%, 2=2.8%, 4=67.7%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:11.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:11.653 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:11.653 issued rwts: total=11151,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:11.653 latency : target=0, window=0, percentile=100.00%, depth=8
00:38:11.653 filename1: (groupid=0, jobs=1): err= 0: pid=1264558: Thu Dec 5 13:42:33 2024
00:38:11.653 read: IOPS=2003, BW=15.6MiB/s (16.4MB/s)(78.3MiB/5001msec)
00:38:11.653 slat (nsec): min=5535, max=70118, avg=8459.36, stdev=2986.61
00:38:11.653 clat (usec): min=1363, max=7703, avg=3970.65, stdev=557.62
00:38:11.653 lat (usec): min=1369, max=7709, avg=3979.11, stdev=557.45
00:38:11.653 clat percentiles (usec):
00:38:11.653 | 1.00th=[ 3130], 5.00th=[ 3425], 10.00th=[ 3556], 20.00th=[ 3654],
00:38:11.653 | 30.00th=[ 3752], 40.00th=[ 3818], 50.00th=[ 3851], 60.00th=[ 3851],
00:38:11.653 | 70.00th=[ 3916], 80.00th=[ 4146], 90.00th=[ 4490], 95.00th=[ 5538],
00:38:11.653 | 99.00th=[ 6063], 99.50th=[ 6128], 99.90th=[ 6456], 99.95th=[ 6849],
00:38:11.653 | 99.99th=[ 7701]
00:38:11.653 bw ( KiB/s): min=15664, max=16592, per=24.19%, avg=16019.10, stdev=298.41, samples=10
00:38:11.653 iops : min= 1958, max= 2074, avg=2002.30, stdev=37.35, samples=10
00:38:11.653 lat (msec) : 2=0.03%, 4=73.13%, 10=26.84%
00:38:11.653 cpu : usr=96.92%, sys=2.82%, ctx=7, majf=0, minf=67
00:38:11.653 IO depths : 1=0.1%, 2=0.1%, 4=73.2%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:11.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:11.653 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:11.653 issued rwts: total=10018,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:11.653 latency : target=0, window=0, percentile=100.00%, depth=8
00:38:11.653 filename1: (groupid=0, jobs=1): err= 0: pid=1264559: Thu Dec 5 13:42:33 2024
00:38:11.653 read: IOPS=2013, BW=15.7MiB/s (16.5MB/s)(78.7MiB/5002msec)
00:38:11.653 slat (usec): min=5, max=350, avg= 8.40, stdev= 5.02
00:38:11.653 clat (usec): min=1372, max=6940, avg=3950.01, stdev=634.55
00:38:11.653 lat (usec): min=1389, max=6948, avg=3958.41, stdev=634.22
00:38:11.653 clat percentiles (usec):
00:38:11.653 | 1.00th=[ 2802], 5.00th=[ 3228], 10.00th=[ 3458], 20.00th=[ 3621],
00:38:11.653 | 30.00th=[ 3720], 40.00th=[ 3785], 50.00th=[ 3818], 60.00th=[ 3851],
00:38:11.653 | 70.00th=[ 3916], 80.00th=[ 4146], 90.00th=[ 4621], 95.00th=[ 5538],
00:38:11.653 | 99.00th=[ 6063], 99.50th=[ 6259], 99.90th=[ 6456], 99.95th=[ 6718],
00:38:11.653 | 99.99th=[ 6915]
00:38:11.653 bw ( KiB/s): min=15824, max=16464, per=24.32%, avg=16108.80, stdev=211.03, samples=10
00:38:11.653 iops : min= 1978, max= 2058, avg=2013.60, stdev=26.38, samples=10
00:38:11.653 lat (msec) : 2=0.53%, 4=72.57%, 10=26.90%
00:38:11.653 cpu : usr=96.96%, sys=2.78%, ctx=7, majf=0, minf=67
00:38:11.653 IO depths : 1=0.1%, 2=0.2%, 4=72.6%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:11.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:11.653 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:11.653 issued rwts: total=10071,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:11.653 latency : target=0, window=0, percentile=100.00%, depth=8
00:38:11.653
00:38:11.653 Run status group 0 (all jobs):
00:38:11.653 READ: bw=64.7MiB/s (67.8MB/s), 15.6MiB/s-17.4MiB/s (16.4MB/s-18.3MB/s), io=323MiB (339MB), run=5001-5002msec
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:11.653
00:38:11.653 real 0m24.398s
00:38:11.653 user 5m17.441s
00:38:11.653 sys 0m4.307s
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:11.653 13:42:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:11.653 ************************************
00:38:11.653 END TEST fio_dif_rand_params
00:38:11.653 ************************************
00:38:11.653 13:42:33 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest
00:38:11.653 13:42:33 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:38:11.653 13:42:33 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:11.653 13:42:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:11.653 ************************************
00:38:11.653 START TEST fio_dif_digest
00:38:11.653 ************************************
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub
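For reference, the create_subsystems step traced next boils down to four RPCs: create a null bdev with 16-byte metadata and DIF type 3, wrap it in an NVMe-oF subsystem, and expose it over TCP. A standalone sketch of the same setup via scripts/rpc.py, assuming the default RPC socket (the test's rpc_cmd wrapper issues the same calls):

# 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420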
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@"
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:38:11.653 bdev_null0
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:38:11.653 [2024-12-05 13:42:33.939256] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=()
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:11.653 13:42:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:11.653 {
00:38:11.653 "params": {
00:38:11.653 "name": "Nvme$subsystem",
00:38:11.653 "trtype": "$TEST_TRANSPORT",
00:38:11.653 "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:11.653 "adrfam": "ipv4",
00:38:11.653 "trsvcid": "$NVMF_PORT",
00:38:11.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:11.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:11.654 "hdgst": ${hdgst:-false},
00:38:11.654 "ddgst": ${ddgst:-false}
00:38:11.654 },
00:38:11.654 "method": "bdev_nvme_attach_controller"
00:38:11.654 }
00:38:11.654 EOF
00:38:11.654 )")
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib=
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 ))
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files ))
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq .
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=,
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:38:11.654 "params": {
00:38:11.654 "name": "Nvme0",
00:38:11.654 "trtype": "tcp",
00:38:11.654 "traddr": "10.0.0.2",
00:38:11.654 "adrfam": "ipv4",
00:38:11.654 "trsvcid": "4420",
00:38:11.654 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:11.654 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:11.654 "hdgst": true,
00:38:11.654 "ddgst": true
00:38:11.654 },
00:38:11.654 "method": "bdev_nvme_attach_controller"
00:38:11.654 }'
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:38:11.654 13:42:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:11.654 13:42:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:11.654 13:42:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:11.654 13:42:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:38:11.654 13:42:34 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:11.914 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:38:11.914 ...
00:38:11.914 fio-3.35
00:38:11.914 Starting 3 threads
00:38:24.140
00:38:24.140 filename0: (groupid=0, jobs=1): err= 0: pid=1265886: Thu Dec 5 13:42:45 2024
00:38:24.141 read: IOPS=208, BW=26.1MiB/s (27.4MB/s)(262MiB/10049msec)
00:38:24.141 slat (nsec): min=5811, max=60679, avg=8205.90, stdev=2075.66
00:38:24.141 clat (usec): min=11049, max=50492, avg=14337.73, stdev=1460.97
00:38:24.141 lat (usec): min=11055, max=50499, avg=14345.94, stdev=1460.85
00:38:24.141 clat percentiles (usec):
00:38:24.141 | 1.00th=[12256], 5.00th=[12780], 10.00th=[13173], 20.00th=[13435],
00:38:24.141 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484],
00:38:24.141 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15533], 95.00th=[15926],
00:38:24.141 | 99.00th=[16712], 99.50th=[17171], 99.90th=[18482], 99.95th=[49021],
00:38:24.141 | 99.99th=[50594]
00:38:24.141 bw ( KiB/s): min=26112, max=27392, per=32.64%, avg=26828.80, stdev=367.71, samples=20
00:38:24.141 iops : min= 204, max= 214, avg=209.60, stdev= 2.87, samples=20
00:38:24.141 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05%
00:38:24.141 cpu : usr=93.85%, sys=5.64%, ctx=446, majf=0, minf=162
00:38:24.141 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:24.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:24.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:24.141 issued rwts: total=2098,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:24.141 latency : target=0, window=0, percentile=100.00%, depth=3
00:38:24.141 filename0: (groupid=0, jobs=1): err= 0: pid=1265887: Thu Dec 5 13:42:45 2024
00:38:24.141 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(248MiB/10045msec)
00:38:24.141 slat (nsec): min=5929, max=40176, avg=8114.02, stdev=1860.48
00:38:24.141 clat (usec): min=11969, max=53576, avg=15149.31, stdev=1594.75
00:38:24.141 lat (usec): min=11976, max=53582, avg=15157.42, stdev=1594.63
00:38:24.141 clat percentiles (usec):
00:38:24.141 | 1.00th=[12780], 5.00th=[13435], 10.00th=[13829], 20.00th=[14222],
00:38:24.141 | 30.00th=[14615], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270],
00:38:24.141 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16581], 95.00th=[16909],
00:38:24.141 | 99.00th=[17957], 99.50th=[18220], 99.90th=[51119], 99.95th=[53740],
00:38:24.141 | 99.99th=[53740]
00:38:24.141 bw ( KiB/s): min=24320, max=26112, per=30.92%, avg=25411.37, stdev=450.27, samples=19
00:38:24.141 iops : min= 190, max= 204, avg=198.53, stdev= 3.52, samples=19
00:38:24.141 lat (msec) : 20=99.90%, 100=0.10%
00:38:24.141 cpu : usr=95.01%, sys=4.74%, ctx=23, majf=0, minf=134
00:38:24.141 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:24.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:24.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:24.141 issued rwts: total=1985,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:24.141 latency : target=0, window=0, percentile=100.00%, depth=3
00:38:24.141 filename0: (groupid=0, jobs=1): err= 0: pid=1265888: Thu Dec 5 13:42:45 2024
00:38:24.141 read: IOPS=236, BW=29.6MiB/s (31.0MB/s)(296MiB/10006msec)
00:38:24.141 slat (nsec): min=5961, max=40095, avg=8483.16, stdev=2182.45
00:38:24.141 clat (usec): min=6816, max=16034, avg=12660.66, stdev=893.01
00:38:24.141 lat (usec): min=6822, max=16041, avg=12669.14, stdev=893.31
00:38:24.141 clat percentiles (usec):
00:38:24.141 | 1.00th=[10552], 5.00th=[11207], 10.00th=[11469], 20.00th=[11863],
00:38:24.141 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911],
00:38:24.141 | 70.00th=[13173], 80.00th=[13435], 90.00th=[13698], 95.00th=[14091],
00:38:24.141 | 99.00th=[14615], 99.50th=[14877], 99.90th=[15270], 99.95th=[15270],
00:38:24.141 | 99.99th=[16057]
00:38:24.141 bw ( KiB/s): min=29184, max=31744, per=36.84%, avg=30275.37, stdev=708.02, samples=19
00:38:24.141 iops : min= 228, max= 248, avg=236.53, stdev= 5.53, samples=19
00:38:24.141 lat (msec) : 10=0.34%, 20=99.66%
00:38:24.141 cpu : usr=94.36%, sys=4.62%, ctx=255, majf=0, minf=129
00:38:24.141 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:24.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:24.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:24.141 issued rwts: total=2369,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:24.141 latency : target=0, window=0, percentile=100.00%, depth=3
00:38:24.141
00:38:24.141 Run status group 0 (all jobs):
00:38:24.141 READ: bw=80.3MiB/s (84.2MB/s), 24.7MiB/s-29.6MiB/s (25.9MB/s-31.0MB/s), io=807MiB (846MB), run=10006-10049msec
00:38:24.141 13:42:45 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0
00:38:24.141 13:42:45 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub
00:38:24.141 13:42:45 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@"
00:38:24.141 13:42:45 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0
00:38:24.141 13:42:45 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0
00:38:24.141 13:42:45 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:38:24.141 13:42:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:24.141 13:42:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:38:24.141 13:42:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:24.141 13:42:45 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:38:24.141 13:42:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:24.141 13:42:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:38:24.141 13:42:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:24.141
00:38:24.141 real 0m11.349s
00:38:24.141 user 0m42.576s
00:38:24.141 sys 0m1.875s
00:38:24.141 13:42:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:24.141 13:42:45 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:38:24.141 ************************************
00:38:24.141 END TEST fio_dif_digest
00:38:24.141 ************************************
00:38:24.141 13:42:45 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:38:24.141 13:42:45 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini
00:38:24.141 13:42:45 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:24.141 13:42:45 nvmf_dif -- nvmf/common.sh@121 -- # sync
00:38:24.141 13:42:45 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:24.141 13:42:45 nvmf_dif -- nvmf/common.sh@124 -- # set +e
00:38:24.141 13:42:45 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:24.141 13:42:45 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:24.141 rmmod nvme_tcp
00:38:24.141 rmmod nvme_fabrics
00:38:24.141 rmmod nvme_keyring
00:38:24.141 13:42:45 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:38:24.141 13:42:45 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:38:24.141 13:42:45 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:38:24.141 13:42:45 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1255181 ']' 00:38:24.141 13:42:45 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1255181 00:38:24.141 13:42:45 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1255181 ']' 00:38:24.141 13:42:45 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1255181 00:38:24.141 13:42:45 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:38:24.141 13:42:45 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:24.141 13:42:45 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1255181 00:38:24.141 13:42:45 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:24.141 13:42:45 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:24.141 13:42:45 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1255181' 00:38:24.141 killing process with pid 1255181 00:38:24.141 13:42:45 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1255181 00:38:24.141 13:42:45 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1255181 00:38:24.141 13:42:45 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:24.141 13:42:45 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:26.690 Waiting for block devices as requested 00:38:26.690 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:26.690 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:26.951 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:26.951 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:26.951 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:27.212 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:27.212 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:27.212 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:27.213 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:27.474 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:27.474 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:27.734 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:27.734 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:27.734 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:27.994 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:27.994 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:27.994 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:28.256 13:42:50 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:28.256 13:42:50 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:28.256 13:42:50 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:38:28.256 13:42:50 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:38:28.256 13:42:50 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:28.256 13:42:50 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:38:28.256 13:42:50 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:28.256 13:42:50 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:28.256 13:42:50 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:28.256 13:42:50 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:28.256 13:42:50 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:30.803 13:42:52 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:30.803 00:38:30.803 real 1m19.784s 00:38:30.803 
user 8m2.759s 00:38:30.803 sys 0m22.511s 00:38:30.803 13:42:52 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:30.803 13:42:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:38:30.803 ************************************ 00:38:30.803 END TEST nvmf_dif 00:38:30.803 ************************************ 00:38:30.803 13:42:52 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:30.803 13:42:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:30.803 13:42:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:30.803 13:42:52 -- common/autotest_common.sh@10 -- # set +x 00:38:30.803 ************************************ 00:38:30.803 START TEST nvmf_abort_qd_sizes 00:38:30.803 ************************************ 00:38:30.803 13:42:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:38:30.803 * Looking for test storage... 00:38:30.803 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:30.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.803 --rc genhtml_branch_coverage=1 00:38:30.803 --rc genhtml_function_coverage=1 00:38:30.803 --rc genhtml_legend=1 00:38:30.803 --rc geninfo_all_blocks=1 00:38:30.803 --rc geninfo_unexecuted_blocks=1 00:38:30.803 00:38:30.803 ' 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:30.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.803 --rc genhtml_branch_coverage=1 00:38:30.803 --rc genhtml_function_coverage=1 00:38:30.803 --rc genhtml_legend=1 00:38:30.803 --rc geninfo_all_blocks=1 00:38:30.803 --rc geninfo_unexecuted_blocks=1 00:38:30.803 00:38:30.803 ' 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:30.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.803 --rc genhtml_branch_coverage=1 00:38:30.803 --rc genhtml_function_coverage=1 00:38:30.803 --rc genhtml_legend=1 00:38:30.803 --rc geninfo_all_blocks=1 00:38:30.803 --rc geninfo_unexecuted_blocks=1 00:38:30.803 00:38:30.803 ' 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:30.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:30.803 --rc genhtml_branch_coverage=1 00:38:30.803 --rc genhtml_function_coverage=1 00:38:30.803 --rc genhtml_legend=1 00:38:30.803 --rc geninfo_all_blocks=1 00:38:30.803 --rc geninfo_unexecuted_blocks=1 00:38:30.803 00:38:30.803 ' 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:30.803 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:30.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:38:30.804 13:42:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:38:38.942 Found 0000:31:00.0 (0x8086 - 0x159b) 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:38.942 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:38:38.943 Found 0000:31:00.1 (0x8086 - 0x159b) 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:38:38.943 Found net devices under 0000:31:00.0: cvl_0_0 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:38:38.943 Found net devices under 0000:31:00.1: cvl_0_1 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:38.943 13:43:01 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:38.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:38.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.583 ms 00:38:38.943 00:38:38.943 --- 10.0.0.2 ping statistics --- 00:38:38.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:38.943 rtt min/avg/max/mdev = 0.583/0.583/0.583/0.000 ms 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:38.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:38.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:38:38.943 00:38:38.943 --- 10.0.0.1 ping statistics --- 00:38:38.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:38.943 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:38.943 13:43:01 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:43.177 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:43.177 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:43.177 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:43.177 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:43.177 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:43.177 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:43.177 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:43.177 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:43.177 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:43.177 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:43.177 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:43.177 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:43.177 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:43.177 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:43.177 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:43.177 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:43.177 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1276237 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1276237 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1276237 ']' 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:38:43.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:43.177 13:43:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:38:43.177 [2024-12-05 13:43:05.555330] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization...
00:38:43.177 [2024-12-05 13:43:05.555389] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:43.177 [2024-12-05 13:43:05.646108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:38:43.177 [2024-12-05 13:43:05.689078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:43.177 [2024-12-05 13:43:05.689114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:43.177 [2024-12-05 13:43:05.689123] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:43.177 [2024-12-05 13:43:05.689130] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:43.177 [2024-12-05 13:43:05.689136] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:38:43.177 [2024-12-05 13:43:05.690927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:38:43.177 [2024-12-05 13:43:05.691134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:38:43.177 [2024-12-05 13:43:05.691134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:38:43.177 [2024-12-05 13:43:05.690985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0
00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT
00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes
00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace
00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs
00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes
00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]]
00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]})
00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]]
00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s
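The nvme_in_userspace scan traced here walks the cached PCI bus for NVMe-class functions (class code 0x010802) and keeps the ones not claimed by the kernel nvme driver, i.e. controllers SPDK may drive from userspace; that is how 0000:65:00.0, rebound to vfio-pci by setup.sh above, ends up selected. A rough standalone sketch of the same filter, using lspci from pciutils rather than the script's own pci_bus_cache:

# NVMe-class PCI functions (class 0108) not bound to the kernel nvme driver
for bdf in $(lspci -Dn -d ::0108 | awk '{print $1}'); do
    [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] || echo "$bdf"
done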
13:43:06 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:43.896 13:43:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:44.172 ************************************ 00:38:44.172 START TEST spdk_target_abort 00:38:44.172 ************************************ 00:38:44.172 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:38:44.172 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:44.172 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:38:44.172 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.172 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:44.433 spdk_targetn1 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:44.433 [2024-12-05 13:43:06.759838] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:44.433 [2024-12-05 13:43:06.816164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:44.433 13:43:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:44.696 [2024-12-05 13:43:07.013898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:191 nsid:1 lba:608 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:38:44.696 [2024-12-05 13:43:07.013926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:004d p:1 m:0 dnr:0 00:38:44.696 [2024-12-05 13:43:07.021355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:848 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:38:44.696 [2024-12-05 13:43:07.021372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:006c p:1 m:0 dnr:0 00:38:44.696 [2024-12-05 13:43:07.037405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1456 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:44.696 [2024-12-05 13:43:07.037422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00b7 p:1 m:0 dnr:0 00:38:44.696 [2024-12-05 13:43:07.045340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1784 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:44.696 [2024-12-05 13:43:07.045355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00e0 p:1 m:0 dnr:0 00:38:44.696 [2024-12-05 13:43:07.088732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3504 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:38:44.696 [2024-12-05 13:43:07.088750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00b8 p:0 m:0 dnr:0 00:38:44.696 [2024-12-05 13:43:07.093349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3568 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:38:44.696 [2024-12-05 13:43:07.093362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00c0 p:0 m:0 dnr:0 00:38:47.996 Initializing NVMe Controllers 00:38:47.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:47.996 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:47.996 Initialization complete. Launching workers. 
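The target setup traced above reduces to five RPCs against the target's /var/tmp/spdk.sock. A minimal sketch, assuming a running SPDK nvmf target and using the NQN, serial, PCI address, and listen address from this run:

  # hedged reconstruction of target/abort_qd_sizes.sh steps 45-50 as traced above
  rpc=scripts/rpc.py

  # claim the local NVMe SSD as an SPDK bdev named spdk_target
  $rpc bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
  # bring up the NVMe/TCP transport (flags as traced above)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # create the subsystem (-a: allow any host) and expose the bdev as NSID 1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  # start accepting NVMe/TCP connections
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420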
00:38:47.996 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12302, failed: 6 00:38:47.996 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3136, failed to submit 9172 00:38:47.996 success 736, unsuccessful 2400, failed 0 00:38:47.996 13:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:47.996 13:43:10 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:47.996 [2024-12-05 13:43:10.263942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:190 nsid:1 lba:1152 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:38:47.996 [2024-12-05 13:43:10.263990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:190 cdw0:0 sqhd:009c p:1 m:0 dnr:0 00:38:47.996 [2024-12-05 13:43:10.296112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:168 nsid:1 lba:1904 len:8 PRP1 0x200004e44000 PRP2 0x0 00:38:47.996 [2024-12-05 13:43:10.296137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:168 cdw0:0 sqhd:0000 p:1 m:0 dnr:0 00:38:48.568 [2024-12-05 13:43:10.974013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:179 nsid:1 lba:17472 len:8 PRP1 0x200004e40000 PRP2 0x0 00:38:48.568 [2024-12-05 13:43:10.974044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:179 cdw0:0 sqhd:0090 p:1 m:0 dnr:0 00:38:51.112 Initializing NVMe Controllers 00:38:51.112 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:51.112 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:51.113 Initialization complete. Launching workers. 00:38:51.113 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8414, failed: 3 00:38:51.113 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1240, failed to submit 7177 00:38:51.113 success 320, unsuccessful 920, failed 0 00:38:51.113 13:43:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:51.113 13:43:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:54.417 Initializing NVMe Controllers 00:38:54.417 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:54.417 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:54.417 Initialization complete. Launching workers. 
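Both this spdk_target_abort pass and the kernel_target_abort pass below drive the same inner loop; condensed from the trace, with the connection string already assembled from trtype/adrfam/traddr/trsvcid/subnqn:

  # condensed from rabort() in target/abort_qd_sizes.sh as traced above
  qds=(4 24 64)
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in "${qds[@]}"; do
      # 4 KiB mixed read/write at a 50% mix; the tool aborts in-flight
      # commands and prints the success/unsuccessful/failed counts seen above
      build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done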
00:38:54.417 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41810, failed: 0 00:38:54.417 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2882, failed to submit 38928 00:38:54.417 success 594, unsuccessful 2288, failed 0 00:38:54.417 13:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:54.417 13:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.417 13:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:54.417 13:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:54.417 13:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:54.417 13:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:54.417 13:43:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1276237 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1276237 ']' 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1276237 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1276237 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1276237' 00:38:56.335 killing process with pid 1276237 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1276237 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1276237 00:38:56.335 00:38:56.335 real 0m12.168s 00:38:56.335 user 0m49.609s 00:38:56.335 sys 0m1.897s 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:56.335 ************************************ 00:38:56.335 END TEST spdk_target_abort 00:38:56.335 ************************************ 00:38:56.335 13:43:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:56.335 13:43:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:56.335 13:43:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:56.335 13:43:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:56.335 ************************************ 00:38:56.335 START TEST kernel_target_abort 00:38:56.335 
************************************ 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:56.335 13:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:00.541 Waiting for block devices as requested 00:39:00.541 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:00.541 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:00.541 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:00.541 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:00.541 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:00.541 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:00.541 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:00.541 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:00.541 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:00.801 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:00.801 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:01.061 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:01.061 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:01.061 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:01.061 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:01.321 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:01.321 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:01.581 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:39:01.581 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:39:01.581 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:39:01.581 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:39:01.581 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:39:01.581 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:39:01.581 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:39:01.582 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:39:01.582 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:39:01.582 No valid GPT data, bailing 00:39:01.582 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:39:01.582 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:39:01.582 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:39:01.582 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:39:01.582 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:39:01.582 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:01.582 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:01.582 13:43:24 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:39:01.843 00:39:01.843 Discovery Log Number of Records 2, Generation counter 2 00:39:01.843 =====Discovery Log Entry 0====== 00:39:01.843 trtype: tcp 00:39:01.843 adrfam: ipv4 00:39:01.843 subtype: current discovery subsystem 00:39:01.843 treq: not specified, sq flow control disable supported 00:39:01.843 portid: 1 00:39:01.843 trsvcid: 4420 00:39:01.843 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:39:01.843 traddr: 10.0.0.1 00:39:01.843 eflags: none 00:39:01.843 sectype: none 00:39:01.843 =====Discovery Log Entry 1====== 00:39:01.843 trtype: tcp 00:39:01.843 adrfam: ipv4 00:39:01.843 subtype: nvme subsystem 00:39:01.843 treq: not specified, sq flow control disable supported 00:39:01.843 portid: 1 00:39:01.843 trsvcid: 4420 00:39:01.843 subnqn: nqn.2016-06.io.spdk:testnqn 00:39:01.843 traddr: 10.0.0.1 00:39:01.843 eflags: none 00:39:01.843 sectype: none 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:01.843 13:43:24 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:01.843 13:43:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:05.140 Initializing NVMe Controllers 00:39:05.140 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:05.140 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:05.140 Initialization complete. Launching workers. 00:39:05.140 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66787, failed: 0 00:39:05.140 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 66787, failed to submit 0 00:39:05.140 success 0, unsuccessful 66787, failed 0 00:39:05.140 13:43:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:05.140 13:43:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:08.452 Initializing NVMe Controllers 00:39:08.452 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:08.452 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:08.452 Initialization complete. Launching workers. 
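The bare `echo` commands in the kernel-target setup traced above have their redirections hidden by xtrace. A plausible reconstruction of configure_kernel_target, where the configfs attribute paths are assumptions based on the standard /sys/kernel/config/nvmet layout:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  modprobe nvmet
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo "SPDK-$nqn"  > "$sub/attr_serial"               # serial string; exact attribute name is an assumption
  echo 1            > "$sub/attr_allow_any_host"
  echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"  # the disk that passed the GPT check above
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"                     # publish the subsystem on the port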
00:39:08.452 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 107634, failed: 0 00:39:08.452 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27146, failed to submit 80488 00:39:08.452 success 0, unsuccessful 27146, failed 0 00:39:08.452 13:43:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:39:08.452 13:43:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:39:11.752 Initializing NVMe Controllers 00:39:11.752 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:39:11.752 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:39:11.752 Initialization complete. Launching workers. 00:39:11.752 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101631, failed: 0 00:39:11.752 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25414, failed to submit 76217 00:39:11.752 success 0, unsuccessful 25414, failed 0 00:39:11.752 13:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:39:11.752 13:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:39:11.753 13:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:39:11.753 13:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:11.753 13:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:39:11.753 13:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:39:11.753 13:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:39:11.753 13:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:39:11.753 13:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:39:11.753 13:43:33 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:39:15.053 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:15.053 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:15.053 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:15.053 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:15.053 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:15.053 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:15.053 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:39:15.053 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:15.053 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:39:15.053 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:39:15.053 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:39:15.053 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:39:15.053 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:39:15.053 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:39:15.053 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:39:15.053 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:39:16.968 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:39:17.229 00:39:17.229 real 0m20.883s 00:39:17.229 user 0m10.115s 00:39:17.229 sys 0m6.510s 00:39:17.229 13:43:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:17.229 13:43:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:39:17.229 ************************************ 00:39:17.229 END TEST kernel_target_abort 00:39:17.229 ************************************ 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:17.229 rmmod nvme_tcp 00:39:17.229 rmmod nvme_fabrics 00:39:17.229 rmmod nvme_keyring 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1276237 ']' 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1276237 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1276237 ']' 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1276237 00:39:17.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1276237) - No such process 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1276237 is not found' 00:39:17.229 Process with pid 1276237 is not found 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:39:17.229 13:43:39 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:39:21.451 Waiting for block devices as requested 00:39:21.451 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:21.451 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:21.451 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:21.451 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:21.451 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:21.451 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:21.451 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:21.712 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:21.712 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:39:21.972 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:39:21.972 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:39:21.972 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:39:21.972 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:39:22.231 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:39:22.231 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:39:22.231 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:39:22.231 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:39:22.802 13:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:22.802 13:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:22.802 13:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:39:22.802 13:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:39:22.802 13:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:22.802 13:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:39:22.802 13:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:22.802 13:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:22.802 13:43:45 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:22.802 13:43:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:39:22.802 13:43:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:24.714 13:43:47 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:24.714 00:39:24.714 real 0m54.241s 00:39:24.714 user 1m5.377s 00:39:24.714 sys 0m20.545s 00:39:24.714 13:43:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:24.714 13:43:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:39:24.714 ************************************ 00:39:24.714 END TEST nvmf_abort_qd_sizes 00:39:24.714 ************************************ 00:39:24.714 13:43:47 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:24.714 13:43:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:39:24.714 13:43:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:24.714 13:43:47 -- common/autotest_common.sh@10 -- # set +x 00:39:24.714 ************************************ 00:39:24.715 START TEST keyring_file 00:39:24.715 ************************************ 00:39:24.715 13:43:47 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:39:24.976 * Looking for test storage... 
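clean_kernel_target, traced a little earlier, is the exact inverse of that setup; condensed, with the target of the bare `echo 0` assumed to be the namespace enable attribute:

  nqn=nqn.2016-06.io.spdk:testnqn
  echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable   # assumed target of the bare 'echo 0'
  rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/$nqn
  rmdir  /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1
  rmdir  /sys/kernel/config/nvmet/ports/1
  rmdir  /sys/kernel/config/nvmet/subsystems/$nqn
  modprobe -r nvmet_tcp nvmet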
00:39:24.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:24.976 13:43:47 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:24.976 13:43:47 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:39:24.976 13:43:47 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:24.976 13:43:47 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@345 -- # : 1 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@353 -- # local d=1 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@355 -- # echo 1 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@353 -- # local d=2 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@355 -- # echo 2 00:39:24.976 13:43:47 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:39:24.977 13:43:47 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:24.977 13:43:47 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:24.977 13:43:47 keyring_file -- scripts/common.sh@368 -- # return 0 00:39:24.977 13:43:47 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:24.977 13:43:47 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:24.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.977 --rc genhtml_branch_coverage=1 00:39:24.977 --rc genhtml_function_coverage=1 00:39:24.977 --rc genhtml_legend=1 00:39:24.977 --rc geninfo_all_blocks=1 00:39:24.977 --rc geninfo_unexecuted_blocks=1 00:39:24.977 00:39:24.977 ' 00:39:24.977 13:43:47 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:24.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.977 --rc genhtml_branch_coverage=1 00:39:24.977 --rc genhtml_function_coverage=1 00:39:24.977 --rc genhtml_legend=1 00:39:24.977 --rc geninfo_all_blocks=1 
00:39:24.977 --rc geninfo_unexecuted_blocks=1 00:39:24.977 00:39:24.977 ' 00:39:24.977 13:43:47 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:24.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.977 --rc genhtml_branch_coverage=1 00:39:24.977 --rc genhtml_function_coverage=1 00:39:24.977 --rc genhtml_legend=1 00:39:24.977 --rc geninfo_all_blocks=1 00:39:24.977 --rc geninfo_unexecuted_blocks=1 00:39:24.977 00:39:24.977 ' 00:39:24.977 13:43:47 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:24.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:24.977 --rc genhtml_branch_coverage=1 00:39:24.977 --rc genhtml_function_coverage=1 00:39:24.977 --rc genhtml_legend=1 00:39:24.977 --rc geninfo_all_blocks=1 00:39:24.977 --rc geninfo_unexecuted_blocks=1 00:39:24.977 00:39:24.977 ' 00:39:24.977 13:43:47 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:24.977 13:43:47 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:24.977 13:43:47 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:39:24.977 13:43:47 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:24.977 13:43:47 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:24.977 13:43:47 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:24.977 13:43:47 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.977 13:43:47 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.977 13:43:47 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.977 13:43:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:39:24.977 13:43:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@51 -- # : 0 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:24.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:24.977 13:43:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:24.977 13:43:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:24.977 13:43:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:24.977 13:43:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:39:24.977 13:43:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:39:24.977 13:43:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:39:24.977 13:43:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:24.977 13:43:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
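The prep_key calls around this point (see the mktemp/python/chmod steps that follow) write NVMe/TCP TLS PSKs in the interchange format. A hedged sketch of what the `python -` step computes, modelled on the TP 8006 interchange encoding; the hash-indicator field, the treatment of the key as ASCII bytes, and the CRC byte order are all assumptions, and the exact logic lives in nvmf/common.sh's format_interchange_psk:

  key=00112233445566778899aabbccddeeff
  path=$(mktemp)
  # base64(PSK bytes || CRC-32), wrapped as NVMeTLSkey-1:<hash>:<b64>:
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:01:"+base64.b64encode(k+c).decode()+":")' "$key" > "$path"
  chmod 0600 "$path"   # restrict permissions before registering the key file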
00:39:24.977 13:43:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:24.977 13:43:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:24.977 13:43:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:24.977 13:43:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:24.977 13:43:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wK3P7WZJ7L 00:39:24.977 13:43:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:24.977 13:43:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:24.977 13:43:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wK3P7WZJ7L 00:39:24.977 13:43:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wK3P7WZJ7L 00:39:24.977 13:43:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.wK3P7WZJ7L 00:39:24.977 13:43:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:39:24.977 13:43:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:39:24.977 13:43:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:39:24.977 13:43:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:24.977 13:43:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:24.977 13:43:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:25.239 13:43:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3OHwYM3qTo 00:39:25.239 13:43:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:25.239 13:43:47 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:25.239 13:43:47 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:25.239 13:43:47 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:25.239 13:43:47 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:25.239 13:43:47 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:25.239 13:43:47 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:25.239 13:43:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3OHwYM3qTo 00:39:25.239 13:43:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3OHwYM3qTo 00:39:25.239 13:43:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.3OHwYM3qTo 00:39:25.239 13:43:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=1287151 00:39:25.239 13:43:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1287151 00:39:25.239 13:43:47 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:25.239 13:43:47 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1287151 ']' 00:39:25.239 13:43:47 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:25.239 13:43:47 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:25.239 13:43:47 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:25.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:25.239 13:43:47 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:25.239 13:43:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:25.239 [2024-12-05 13:43:47.644909] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:39:25.239 [2024-12-05 13:43:47.644964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1287151 ] 00:39:25.239 [2024-12-05 13:43:47.723933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:25.239 [2024-12-05 13:43:47.760312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:26.181 13:43:48 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:26.181 [2024-12-05 13:43:48.442751] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:26.181 null0 00:39:26.181 [2024-12-05 13:43:48.474805] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:26.181 [2024-12-05 13:43:48.475038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:26.181 13:43:48 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:26.181 [2024-12-05 13:43:48.506881] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:39:26.181 request: 00:39:26.181 { 00:39:26.181 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:39:26.181 "secure_channel": false, 00:39:26.181 "listen_address": { 00:39:26.181 "trtype": "tcp", 00:39:26.181 "traddr": "127.0.0.1", 00:39:26.181 "trsvcid": "4420" 00:39:26.181 }, 00:39:26.181 "method": "nvmf_subsystem_add_listener", 00:39:26.181 "req_id": 1 00:39:26.181 } 00:39:26.181 Got JSON-RPC error response 00:39:26.181 response: 00:39:26.181 { 00:39:26.181 
"code": -32602, 00:39:26.181 "message": "Invalid parameters" 00:39:26.181 } 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:26.181 13:43:48 keyring_file -- keyring/file.sh@47 -- # bperfpid=1287178 00:39:26.181 13:43:48 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1287178 /var/tmp/bperf.sock 00:39:26.181 13:43:48 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1287178 ']' 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:26.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:26.181 13:43:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:26.181 [2024-12-05 13:43:48.576449] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:39:26.181 [2024-12-05 13:43:48.576514] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1287178 ] 00:39:26.181 [2024-12-05 13:43:48.671979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:26.181 [2024-12-05 13:43:48.708356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:27.122 13:43:49 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:27.122 13:43:49 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:27.122 13:43:49 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wK3P7WZJ7L 00:39:27.122 13:43:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wK3P7WZJ7L 00:39:27.122 13:43:49 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3OHwYM3qTo 00:39:27.122 13:43:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3OHwYM3qTo 00:39:27.122 13:43:49 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:39:27.122 13:43:49 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:39:27.122 13:43:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:27.122 13:43:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:27.122 13:43:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:39:27.383 13:43:49 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.wK3P7WZJ7L == \/\t\m\p\/\t\m\p\.\w\K\3\P\7\W\Z\J\7\L ]] 00:39:27.383 13:43:49 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:39:27.383 13:43:49 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:39:27.383 13:43:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:27.383 13:43:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:27.383 13:43:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:27.644 13:43:50 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.3OHwYM3qTo == \/\t\m\p\/\t\m\p\.\3\O\H\w\Y\M\3\q\T\o ]] 00:39:27.644 13:43:50 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:39:27.644 13:43:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:27.644 13:43:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:27.644 13:43:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:27.644 13:43:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:27.644 13:43:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:27.644 13:43:50 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:39:27.644 13:43:50 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:39:27.644 13:43:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:27.644 13:43:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:27.644 13:43:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:27.644 13:43:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:27.644 13:43:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:27.905 13:43:50 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:39:27.905 13:43:50 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:27.905 13:43:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:28.165 [2024-12-05 13:43:50.515843] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:28.165 nvme0n1 00:39:28.166 13:43:50 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:39:28.166 13:43:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:28.166 13:43:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:28.166 13:43:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:28.166 13:43:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:28.166 13:43:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:28.426 13:43:50 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:39:28.426 13:43:50 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:39:28.426 13:43:50 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:39:28.426 13:43:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:28.426 13:43:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:28.426 13:43:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:28.426 13:43:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:28.426 13:43:50 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:39:28.426 13:43:50 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:28.686 Running I/O for 1 seconds... 00:39:29.627 16093.00 IOPS, 62.86 MiB/s 00:39:29.627 Latency(us) 00:39:29.627 [2024-12-05T12:43:52.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:29.627 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:39:29.627 nvme0n1 : 1.01 16104.02 62.91 0.00 0.00 7917.03 5898.24 21080.75 00:39:29.627 [2024-12-05T12:43:52.195Z] =================================================================================================================== 00:39:29.627 [2024-12-05T12:43:52.195Z] Total : 16104.02 62.91 0.00 0.00 7917.03 5898.24 21080.75 00:39:29.627 { 00:39:29.627 "results": [ 00:39:29.627 { 00:39:29.627 "job": "nvme0n1", 00:39:29.627 "core_mask": "0x2", 00:39:29.627 "workload": "randrw", 00:39:29.627 "percentage": 50, 00:39:29.627 "status": "finished", 00:39:29.627 "queue_depth": 128, 00:39:29.627 "io_size": 4096, 00:39:29.627 "runtime": 1.007326, 00:39:29.627 "iops": 16104.021935301978, 00:39:29.627 "mibps": 62.90633568477335, 00:39:29.627 "io_failed": 0, 00:39:29.627 "io_timeout": 0, 00:39:29.627 "avg_latency_us": 7917.032875518843, 00:39:29.627 "min_latency_us": 5898.24, 00:39:29.627 "max_latency_us": 21080.746666666666 00:39:29.627 } 00:39:29.627 ], 00:39:29.627 "core_count": 1 00:39:29.627 } 00:39:29.627 13:43:52 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:29.627 13:43:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:29.887 13:43:52 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:39:29.887 13:43:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:29.887 13:43:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:29.887 13:43:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:29.887 13:43:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:29.887 13:43:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:29.887 13:43:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:39:29.887 13:43:52 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:39:29.887 13:43:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:29.887 13:43:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:29.887 13:43:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:29.887 13:43:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:29.887 13:43:52 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:30.147 13:43:52 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:39:30.148 13:43:52 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:30.148 13:43:52 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:30.148 13:43:52 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:30.148 13:43:52 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:30.148 13:43:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:30.148 13:43:52 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:30.148 13:43:52 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:30.148 13:43:52 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:30.148 13:43:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:39:30.409 [2024-12-05 13:43:52.777880] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:30.409 [2024-12-05 13:43:52.778168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19209f0 (107): Transport endpoint is not connected 00:39:30.409 [2024-12-05 13:43:52.779163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19209f0 (9): Bad file descriptor 00:39:30.409 [2024-12-05 13:43:52.780166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:30.409 [2024-12-05 13:43:52.780178] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:30.409 [2024-12-05 13:43:52.780184] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:30.409 [2024-12-05 13:43:52.780191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
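The failure just logged is the point of this step: keyring/file.sh@70 runs bdev_nvme_attach_controller under NOT, the autotest_common.sh helper that inverts an exit status, so the test passes only because attaching with key1, a PSK the listener was not set up with, is rejected during the TLS handshake; the request/response dump that follows is the RPC client echoing the failed call. A hedged sketch of the helper (the body is our reconstruction of the es bookkeeping visible in the trace; the name and the call itself are verbatim):

NOT() {
    local es=0
    "$@" || es=$?    # run the wrapped command and capture its exit status
    (( es != 0 ))    # succeed only when the wrapped command failed
}
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1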
00:39:30.409 request: 00:39:30.409 { 00:39:30.409 "name": "nvme0", 00:39:30.409 "trtype": "tcp", 00:39:30.409 "traddr": "127.0.0.1", 00:39:30.409 "adrfam": "ipv4", 00:39:30.409 "trsvcid": "4420", 00:39:30.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:30.409 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:30.409 "prchk_reftag": false, 00:39:30.409 "prchk_guard": false, 00:39:30.409 "hdgst": false, 00:39:30.409 "ddgst": false, 00:39:30.409 "psk": "key1", 00:39:30.409 "allow_unrecognized_csi": false, 00:39:30.409 "method": "bdev_nvme_attach_controller", 00:39:30.409 "req_id": 1 00:39:30.409 } 00:39:30.409 Got JSON-RPC error response 00:39:30.409 response: 00:39:30.409 { 00:39:30.409 "code": -5, 00:39:30.409 "message": "Input/output error" 00:39:30.409 } 00:39:30.409 13:43:52 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:30.409 13:43:52 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:30.409 13:43:52 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:30.409 13:43:52 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:30.409 13:43:52 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:39:30.409 13:43:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:30.409 13:43:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:30.409 13:43:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:30.409 13:43:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:30.409 13:43:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:30.409 13:43:52 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:39:30.409 13:43:52 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:39:30.409 13:43:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:30.409 13:43:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:30.669 13:43:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:30.669 13:43:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:30.669 13:43:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:30.669 13:43:53 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:39:30.669 13:43:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:39:30.669 13:43:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:30.930 13:43:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:39:30.930 13:43:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:39:31.191 13:43:53 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:39:31.191 13:43:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:31.191 13:43:53 keyring_file -- keyring/file.sh@78 -- # jq length 00:39:31.191 13:43:53 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:39:31.191 13:43:53 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.wK3P7WZJ7L 00:39:31.191 13:43:53 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.wK3P7WZJ7L 00:39:31.191 13:43:53 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:31.191 13:43:53 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.wK3P7WZJ7L 00:39:31.191 13:43:53 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:31.191 13:43:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:31.191 13:43:53 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:31.191 13:43:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:31.191 13:43:53 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wK3P7WZJ7L 00:39:31.191 13:43:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wK3P7WZJ7L 00:39:31.452 [2024-12-05 13:43:53.837896] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.wK3P7WZJ7L': 0100660 00:39:31.452 [2024-12-05 13:43:53.837918] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:39:31.452 request: 00:39:31.452 { 00:39:31.452 "name": "key0", 00:39:31.452 "path": "/tmp/tmp.wK3P7WZJ7L", 00:39:31.452 "method": "keyring_file_add_key", 00:39:31.452 "req_id": 1 00:39:31.452 } 00:39:31.452 Got JSON-RPC error response 00:39:31.452 response: 00:39:31.452 { 00:39:31.452 "code": -1, 00:39:31.452 "message": "Operation not permitted" 00:39:31.452 } 00:39:31.452 13:43:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:31.452 13:43:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:31.452 13:43:53 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:31.452 13:43:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:31.452 13:43:53 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.wK3P7WZJ7L 00:39:31.452 13:43:53 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.wK3P7WZJ7L 00:39:31.452 13:43:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.wK3P7WZJ7L 00:39:31.452 13:43:54 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.wK3P7WZJ7L 00:39:31.712 13:43:54 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:39:31.713 13:43:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:31.713 13:43:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:31.713 13:43:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:31.713 13:43:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:31.713 13:43:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:31.713 13:43:54 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:39:31.713 13:43:54 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:31.713 13:43:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:39:31.713 13:43:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:31.713 13:43:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:31.713 13:43:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:31.713 13:43:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:31.713 13:43:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:31.713 13:43:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:31.713 13:43:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:31.973 [2024-12-05 13:43:54.347191] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.wK3P7WZJ7L': No such file or directory 00:39:31.973 [2024-12-05 13:43:54.347207] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:39:31.973 [2024-12-05 13:43:54.347221] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:39:31.973 [2024-12-05 13:43:54.347227] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:39:31.973 [2024-12-05 13:43:54.347232] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:39:31.973 [2024-12-05 13:43:54.347237] bdev_nvme.c:6769:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:39:31.973 request: 00:39:31.973 { 00:39:31.973 "name": "nvme0", 00:39:31.973 "trtype": "tcp", 00:39:31.973 "traddr": "127.0.0.1", 00:39:31.973 "adrfam": "ipv4", 00:39:31.973 "trsvcid": "4420", 00:39:31.973 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:31.973 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:31.973 "prchk_reftag": false, 00:39:31.973 "prchk_guard": false, 00:39:31.973 "hdgst": false, 00:39:31.973 "ddgst": false, 00:39:31.973 "psk": "key0", 00:39:31.973 "allow_unrecognized_csi": false, 00:39:31.973 "method": "bdev_nvme_attach_controller", 00:39:31.973 "req_id": 1 00:39:31.973 } 00:39:31.973 Got JSON-RPC error response 00:39:31.973 response: 00:39:31.973 { 00:39:31.973 "code": -19, 00:39:31.973 "message": "No such device" 00:39:31.973 } 00:39:31.973 13:43:54 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:39:31.973 13:43:54 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:31.973 13:43:54 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:31.973 13:43:54 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:31.973 13:43:54 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:39:31.973 13:43:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:31.973 13:43:54 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:39:31.973 13:43:54 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:39:31.973 13:43:54 keyring_file -- keyring/common.sh@17 -- # name=key0 00:39:31.973 13:43:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:31.973 13:43:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:39:31.973 13:43:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:39:31.973 13:43:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Qwf9Tc6y9y 00:39:31.973 13:43:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:31.973 13:43:54 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:32.234 13:43:54 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:39:32.234 13:43:54 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:32.234 13:43:54 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:32.234 13:43:54 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:39:32.234 13:43:54 keyring_file -- nvmf/common.sh@733 -- # python - 00:39:32.234 13:43:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Qwf9Tc6y9y 00:39:32.234 13:43:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Qwf9Tc6y9y 00:39:32.234 13:43:54 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Qwf9Tc6y9y 00:39:32.234 13:43:54 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Qwf9Tc6y9y 00:39:32.234 13:43:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Qwf9Tc6y9y 00:39:32.234 13:43:54 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:32.234 13:43:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:32.495 nvme0n1 00:39:32.495 13:43:54 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:39:32.495 13:43:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:32.495 13:43:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:32.495 13:43:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:32.495 13:43:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:32.495 13:43:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:32.755 13:43:55 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:39:32.755 13:43:55 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:39:32.755 13:43:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:39:33.015 13:43:55 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:39:33.015 13:43:55 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:39:33.015 13:43:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:33.015 13:43:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:33.015 13:43:55 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:33.015 13:43:55 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:39:33.015 13:43:55 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:39:33.015 13:43:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:33.015 13:43:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:33.015 13:43:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:33.015 13:43:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:33.015 13:43:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:33.275 13:43:55 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:39:33.275 13:43:55 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:33.275 13:43:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:33.536 13:43:55 keyring_file -- keyring/file.sh@105 -- # jq length 00:39:33.536 13:43:55 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:39:33.536 13:43:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:33.536 13:43:56 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:39:33.536 13:43:56 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Qwf9Tc6y9y 00:39:33.536 13:43:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Qwf9Tc6y9y 00:39:33.797 13:43:56 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3OHwYM3qTo 00:39:33.797 13:43:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3OHwYM3qTo 00:39:34.058 13:43:56 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:34.058 13:43:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:39:34.058 nvme0n1 00:39:34.058 13:43:56 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:39:34.058 13:43:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:39:34.319 13:43:56 keyring_file -- keyring/file.sh@113 -- # config='{ 00:39:34.319 "subsystems": [ 00:39:34.319 { 00:39:34.320 "subsystem": "keyring", 00:39:34.320 "config": [ 00:39:34.320 { 00:39:34.320 "method": "keyring_file_add_key", 00:39:34.320 "params": { 00:39:34.320 "name": "key0", 00:39:34.320 "path": "/tmp/tmp.Qwf9Tc6y9y" 00:39:34.320 } 00:39:34.320 }, 00:39:34.320 { 00:39:34.320 "method": "keyring_file_add_key", 00:39:34.320 "params": { 00:39:34.320 "name": "key1", 00:39:34.320 "path": "/tmp/tmp.3OHwYM3qTo" 00:39:34.320 } 00:39:34.320 } 00:39:34.320 ] 00:39:34.320 
}, 00:39:34.320 { 00:39:34.320 "subsystem": "iobuf", 00:39:34.320 "config": [ 00:39:34.320 { 00:39:34.320 "method": "iobuf_set_options", 00:39:34.320 "params": { 00:39:34.320 "small_pool_count": 8192, 00:39:34.320 "large_pool_count": 1024, 00:39:34.320 "small_bufsize": 8192, 00:39:34.320 "large_bufsize": 135168, 00:39:34.320 "enable_numa": false 00:39:34.320 } 00:39:34.320 } 00:39:34.320 ] 00:39:34.320 }, 00:39:34.320 { 00:39:34.320 "subsystem": "sock", 00:39:34.320 "config": [ 00:39:34.320 { 00:39:34.320 "method": "sock_set_default_impl", 00:39:34.320 "params": { 00:39:34.320 "impl_name": "posix" 00:39:34.320 } 00:39:34.320 }, 00:39:34.320 { 00:39:34.320 "method": "sock_impl_set_options", 00:39:34.320 "params": { 00:39:34.320 "impl_name": "ssl", 00:39:34.320 "recv_buf_size": 4096, 00:39:34.320 "send_buf_size": 4096, 00:39:34.320 "enable_recv_pipe": true, 00:39:34.320 "enable_quickack": false, 00:39:34.320 "enable_placement_id": 0, 00:39:34.320 "enable_zerocopy_send_server": true, 00:39:34.320 "enable_zerocopy_send_client": false, 00:39:34.320 "zerocopy_threshold": 0, 00:39:34.320 "tls_version": 0, 00:39:34.320 "enable_ktls": false 00:39:34.320 } 00:39:34.320 }, 00:39:34.320 { 00:39:34.320 "method": "sock_impl_set_options", 00:39:34.320 "params": { 00:39:34.320 "impl_name": "posix", 00:39:34.320 "recv_buf_size": 2097152, 00:39:34.320 "send_buf_size": 2097152, 00:39:34.320 "enable_recv_pipe": true, 00:39:34.320 "enable_quickack": false, 00:39:34.320 "enable_placement_id": 0, 00:39:34.320 "enable_zerocopy_send_server": true, 00:39:34.320 "enable_zerocopy_send_client": false, 00:39:34.320 "zerocopy_threshold": 0, 00:39:34.320 "tls_version": 0, 00:39:34.320 "enable_ktls": false 00:39:34.320 } 00:39:34.320 } 00:39:34.320 ] 00:39:34.320 }, 00:39:34.320 { 00:39:34.320 "subsystem": "vmd", 00:39:34.320 "config": [] 00:39:34.320 }, 00:39:34.320 { 00:39:34.320 "subsystem": "accel", 00:39:34.320 "config": [ 00:39:34.320 { 00:39:34.320 "method": "accel_set_options", 00:39:34.320 "params": { 00:39:34.320 "small_cache_size": 128, 00:39:34.320 "large_cache_size": 16, 00:39:34.320 "task_count": 2048, 00:39:34.320 "sequence_count": 2048, 00:39:34.320 "buf_count": 2048 00:39:34.320 } 00:39:34.320 } 00:39:34.320 ] 00:39:34.320 }, 00:39:34.320 { 00:39:34.320 "subsystem": "bdev", 00:39:34.320 "config": [ 00:39:34.320 { 00:39:34.320 "method": "bdev_set_options", 00:39:34.320 "params": { 00:39:34.320 "bdev_io_pool_size": 65535, 00:39:34.320 "bdev_io_cache_size": 256, 00:39:34.320 "bdev_auto_examine": true, 00:39:34.320 "iobuf_small_cache_size": 128, 00:39:34.320 "iobuf_large_cache_size": 16 00:39:34.320 } 00:39:34.320 }, 00:39:34.320 { 00:39:34.320 "method": "bdev_raid_set_options", 00:39:34.320 "params": { 00:39:34.320 "process_window_size_kb": 1024, 00:39:34.320 "process_max_bandwidth_mb_sec": 0 00:39:34.320 } 00:39:34.320 }, 00:39:34.320 { 00:39:34.320 "method": "bdev_iscsi_set_options", 00:39:34.320 "params": { 00:39:34.320 "timeout_sec": 30 00:39:34.320 } 00:39:34.320 }, 00:39:34.320 { 00:39:34.320 "method": "bdev_nvme_set_options", 00:39:34.320 "params": { 00:39:34.320 "action_on_timeout": "none", 00:39:34.320 "timeout_us": 0, 00:39:34.320 "timeout_admin_us": 0, 00:39:34.320 "keep_alive_timeout_ms": 10000, 00:39:34.320 "arbitration_burst": 0, 00:39:34.320 "low_priority_weight": 0, 00:39:34.320 "medium_priority_weight": 0, 00:39:34.320 "high_priority_weight": 0, 00:39:34.320 "nvme_adminq_poll_period_us": 10000, 00:39:34.320 "nvme_ioq_poll_period_us": 0, 00:39:34.320 "io_queue_requests": 512, 00:39:34.320 
"delay_cmd_submit": true, 00:39:34.320 "transport_retry_count": 4, 00:39:34.320 "bdev_retry_count": 3, 00:39:34.320 "transport_ack_timeout": 0, 00:39:34.320 "ctrlr_loss_timeout_sec": 0, 00:39:34.320 "reconnect_delay_sec": 0, 00:39:34.320 "fast_io_fail_timeout_sec": 0, 00:39:34.320 "disable_auto_failback": false, 00:39:34.320 "generate_uuids": false, 00:39:34.320 "transport_tos": 0, 00:39:34.320 "nvme_error_stat": false, 00:39:34.320 "rdma_srq_size": 0, 00:39:34.320 "io_path_stat": false, 00:39:34.320 "allow_accel_sequence": false, 00:39:34.320 "rdma_max_cq_size": 0, 00:39:34.320 "rdma_cm_event_timeout_ms": 0, 00:39:34.320 "dhchap_digests": [ 00:39:34.320 "sha256", 00:39:34.320 "sha384", 00:39:34.320 "sha512" 00:39:34.320 ], 00:39:34.320 "dhchap_dhgroups": [ 00:39:34.320 "null", 00:39:34.320 "ffdhe2048", 00:39:34.320 "ffdhe3072", 00:39:34.320 "ffdhe4096", 00:39:34.320 "ffdhe6144", 00:39:34.320 "ffdhe8192" 00:39:34.320 ] 00:39:34.320 } 00:39:34.320 }, 00:39:34.320 { 00:39:34.320 "method": "bdev_nvme_attach_controller", 00:39:34.320 "params": { 00:39:34.320 "name": "nvme0", 00:39:34.320 "trtype": "TCP", 00:39:34.320 "adrfam": "IPv4", 00:39:34.320 "traddr": "127.0.0.1", 00:39:34.320 "trsvcid": "4420", 00:39:34.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:34.320 "prchk_reftag": false, 00:39:34.320 "prchk_guard": false, 00:39:34.320 "ctrlr_loss_timeout_sec": 0, 00:39:34.320 "reconnect_delay_sec": 0, 00:39:34.320 "fast_io_fail_timeout_sec": 0, 00:39:34.320 "psk": "key0", 00:39:34.320 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:34.320 "hdgst": false, 00:39:34.320 "ddgst": false, 00:39:34.320 "multipath": "multipath" 00:39:34.320 } 00:39:34.320 }, 00:39:34.320 { 00:39:34.320 "method": "bdev_nvme_set_hotplug", 00:39:34.320 "params": { 00:39:34.320 "period_us": 100000, 00:39:34.320 "enable": false 00:39:34.320 } 00:39:34.320 }, 00:39:34.320 { 00:39:34.320 "method": "bdev_wait_for_examine" 00:39:34.320 } 00:39:34.320 ] 00:39:34.320 }, 00:39:34.320 { 00:39:34.320 "subsystem": "nbd", 00:39:34.320 "config": [] 00:39:34.320 } 00:39:34.320 ] 00:39:34.320 }' 00:39:34.320 13:43:56 keyring_file -- keyring/file.sh@115 -- # killprocess 1287178 00:39:34.320 13:43:56 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1287178 ']' 00:39:34.320 13:43:56 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1287178 00:39:34.320 13:43:56 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:34.320 13:43:56 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:34.320 13:43:56 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1287178 00:39:34.583 13:43:56 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:34.583 13:43:56 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:34.583 13:43:56 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1287178' 00:39:34.583 killing process with pid 1287178 00:39:34.583 13:43:56 keyring_file -- common/autotest_common.sh@973 -- # kill 1287178 00:39:34.583 Received shutdown signal, test time was about 1.000000 seconds 00:39:34.583 00:39:34.583 Latency(us) 00:39:34.583 [2024-12-05T12:43:57.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:34.583 [2024-12-05T12:43:57.151Z] =================================================================================================================== 00:39:34.583 [2024-12-05T12:43:57.151Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:34.583 13:43:56 
keyring_file -- common/autotest_common.sh@978 -- # wait 1287178 00:39:34.583 13:43:57 keyring_file -- keyring/file.sh@118 -- # bperfpid=1288971 00:39:34.583 13:43:57 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1288971 /var/tmp/bperf.sock 00:39:34.583 13:43:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1288971 ']' 00:39:34.583 13:43:57 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:34.583 13:43:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:34.583 13:43:57 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:39:34.583 13:43:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:34.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:34.583 13:43:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:34.583 13:43:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:34.583 13:43:57 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:39:34.583 "subsystems": [ 00:39:34.583 { 00:39:34.583 "subsystem": "keyring", 00:39:34.583 "config": [ 00:39:34.583 { 00:39:34.583 "method": "keyring_file_add_key", 00:39:34.583 "params": { 00:39:34.583 "name": "key0", 00:39:34.583 "path": "/tmp/tmp.Qwf9Tc6y9y" 00:39:34.583 } 00:39:34.583 }, 00:39:34.583 { 00:39:34.583 "method": "keyring_file_add_key", 00:39:34.583 "params": { 00:39:34.583 "name": "key1", 00:39:34.583 "path": "/tmp/tmp.3OHwYM3qTo" 00:39:34.583 } 00:39:34.583 } 00:39:34.583 ] 00:39:34.583 }, 00:39:34.583 { 00:39:34.583 "subsystem": "iobuf", 00:39:34.583 "config": [ 00:39:34.583 { 00:39:34.584 "method": "iobuf_set_options", 00:39:34.584 "params": { 00:39:34.584 "small_pool_count": 8192, 00:39:34.584 "large_pool_count": 1024, 00:39:34.584 "small_bufsize": 8192, 00:39:34.584 "large_bufsize": 135168, 00:39:34.584 "enable_numa": false 00:39:34.584 } 00:39:34.584 } 00:39:34.584 ] 00:39:34.584 }, 00:39:34.584 { 00:39:34.584 "subsystem": "sock", 00:39:34.584 "config": [ 00:39:34.584 { 00:39:34.584 "method": "sock_set_default_impl", 00:39:34.584 "params": { 00:39:34.584 "impl_name": "posix" 00:39:34.584 } 00:39:34.584 }, 00:39:34.584 { 00:39:34.584 "method": "sock_impl_set_options", 00:39:34.584 "params": { 00:39:34.584 "impl_name": "ssl", 00:39:34.584 "recv_buf_size": 4096, 00:39:34.584 "send_buf_size": 4096, 00:39:34.584 "enable_recv_pipe": true, 00:39:34.584 "enable_quickack": false, 00:39:34.584 "enable_placement_id": 0, 00:39:34.584 "enable_zerocopy_send_server": true, 00:39:34.584 "enable_zerocopy_send_client": false, 00:39:34.584 "zerocopy_threshold": 0, 00:39:34.584 "tls_version": 0, 00:39:34.584 "enable_ktls": false 00:39:34.584 } 00:39:34.584 }, 00:39:34.584 { 00:39:34.584 "method": "sock_impl_set_options", 00:39:34.584 "params": { 00:39:34.584 "impl_name": "posix", 00:39:34.584 "recv_buf_size": 2097152, 00:39:34.584 "send_buf_size": 2097152, 00:39:34.584 "enable_recv_pipe": true, 00:39:34.584 "enable_quickack": false, 00:39:34.584 "enable_placement_id": 0, 00:39:34.584 "enable_zerocopy_send_server": true, 00:39:34.584 "enable_zerocopy_send_client": false, 00:39:34.584 "zerocopy_threshold": 0, 00:39:34.584 "tls_version": 0, 00:39:34.584 "enable_ktls": false 00:39:34.584 } 00:39:34.584 } 00:39:34.584 ] 00:39:34.584 }, 
00:39:34.584 { 00:39:34.584 "subsystem": "vmd", 00:39:34.584 "config": [] 00:39:34.584 }, 00:39:34.584 { 00:39:34.584 "subsystem": "accel", 00:39:34.584 "config": [ 00:39:34.584 { 00:39:34.584 "method": "accel_set_options", 00:39:34.584 "params": { 00:39:34.584 "small_cache_size": 128, 00:39:34.584 "large_cache_size": 16, 00:39:34.584 "task_count": 2048, 00:39:34.584 "sequence_count": 2048, 00:39:34.584 "buf_count": 2048 00:39:34.584 } 00:39:34.584 } 00:39:34.584 ] 00:39:34.584 }, 00:39:34.584 { 00:39:34.584 "subsystem": "bdev", 00:39:34.584 "config": [ 00:39:34.584 { 00:39:34.584 "method": "bdev_set_options", 00:39:34.584 "params": { 00:39:34.584 "bdev_io_pool_size": 65535, 00:39:34.584 "bdev_io_cache_size": 256, 00:39:34.584 "bdev_auto_examine": true, 00:39:34.584 "iobuf_small_cache_size": 128, 00:39:34.584 "iobuf_large_cache_size": 16 00:39:34.584 } 00:39:34.584 }, 00:39:34.584 { 00:39:34.584 "method": "bdev_raid_set_options", 00:39:34.584 "params": { 00:39:34.584 "process_window_size_kb": 1024, 00:39:34.584 "process_max_bandwidth_mb_sec": 0 00:39:34.584 } 00:39:34.584 }, 00:39:34.584 { 00:39:34.584 "method": "bdev_iscsi_set_options", 00:39:34.584 "params": { 00:39:34.584 "timeout_sec": 30 00:39:34.584 } 00:39:34.584 }, 00:39:34.584 { 00:39:34.584 "method": "bdev_nvme_set_options", 00:39:34.584 "params": { 00:39:34.584 "action_on_timeout": "none", 00:39:34.584 "timeout_us": 0, 00:39:34.584 "timeout_admin_us": 0, 00:39:34.584 "keep_alive_timeout_ms": 10000, 00:39:34.584 "arbitration_burst": 0, 00:39:34.584 "low_priority_weight": 0, 00:39:34.584 "medium_priority_weight": 0, 00:39:34.584 "high_priority_weight": 0, 00:39:34.584 "nvme_adminq_poll_period_us": 10000, 00:39:34.584 "nvme_ioq_poll_period_us": 0, 00:39:34.584 "io_queue_requests": 512, 00:39:34.584 "delay_cmd_submit": true, 00:39:34.584 "transport_retry_count": 4, 00:39:34.584 "bdev_retry_count": 3, 00:39:34.584 "transport_ack_timeout": 0, 00:39:34.584 "ctrlr_loss_timeout_sec": 0, 00:39:34.584 "reconnect_delay_sec": 0, 00:39:34.584 "fast_io_fail_timeout_sec": 0, 00:39:34.584 "disable_auto_failback": false, 00:39:34.584 "generate_uuids": false, 00:39:34.584 "transport_tos": 0, 00:39:34.584 "nvme_error_stat": false, 00:39:34.584 "rdma_srq_size": 0, 00:39:34.584 "io_path_stat": false, 00:39:34.584 "allow_accel_sequence": false, 00:39:34.584 "rdma_max_cq_size": 0, 00:39:34.584 "rdma_cm_event_timeout_ms": 0, 00:39:34.584 "dhchap_digests": [ 00:39:34.584 "sha256", 00:39:34.584 "sha384", 00:39:34.584 "sha512" 00:39:34.584 ], 00:39:34.584 "dhchap_dhgroups": [ 00:39:34.584 "null", 00:39:34.584 "ffdhe2048", 00:39:34.584 "ffdhe3072", 00:39:34.584 "ffdhe4096", 00:39:34.584 "ffdhe6144", 00:39:34.584 "ffdhe8192" 00:39:34.584 ] 00:39:34.584 } 00:39:34.584 }, 00:39:34.584 { 00:39:34.584 "method": "bdev_nvme_attach_controller", 00:39:34.584 "params": { 00:39:34.584 "name": "nvme0", 00:39:34.584 "trtype": "TCP", 00:39:34.584 "adrfam": "IPv4", 00:39:34.584 "traddr": "127.0.0.1", 00:39:34.584 "trsvcid": "4420", 00:39:34.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:34.584 "prchk_reftag": false, 00:39:34.584 "prchk_guard": false, 00:39:34.584 "ctrlr_loss_timeout_sec": 0, 00:39:34.584 "reconnect_delay_sec": 0, 00:39:34.584 "fast_io_fail_timeout_sec": 0, 00:39:34.584 "psk": "key0", 00:39:34.584 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:34.584 "hdgst": false, 00:39:34.584 "ddgst": false, 00:39:34.584 "multipath": "multipath" 00:39:34.584 } 00:39:34.584 }, 00:39:34.584 { 00:39:34.584 "method": "bdev_nvme_set_hotplug", 00:39:34.584 "params": { 
00:39:34.584 "period_us": 100000, 00:39:34.584 "enable": false 00:39:34.584 } 00:39:34.584 }, 00:39:34.584 { 00:39:34.584 "method": "bdev_wait_for_examine" 00:39:34.584 } 00:39:34.584 ] 00:39:34.584 }, 00:39:34.584 { 00:39:34.584 "subsystem": "nbd", 00:39:34.584 "config": [] 00:39:34.584 } 00:39:34.584 ] 00:39:34.584 }' 00:39:34.584 [2024-12-05 13:43:57.050836] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 00:39:34.584 [2024-12-05 13:43:57.050904] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288971 ] 00:39:34.584 [2024-12-05 13:43:57.138653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:34.845 [2024-12-05 13:43:57.167962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:34.845 [2024-12-05 13:43:57.312509] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:35.417 13:43:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:35.417 13:43:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:39:35.417 13:43:57 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:39:35.417 13:43:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:35.417 13:43:57 keyring_file -- keyring/file.sh@121 -- # jq length 00:39:35.677 13:43:58 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:39:35.677 13:43:58 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:39:35.677 13:43:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:39:35.677 13:43:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:35.677 13:43:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:35.677 13:43:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:39:35.677 13:43:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:35.937 13:43:58 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:39:35.937 13:43:58 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:39:35.937 13:43:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:39:35.937 13:43:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:39:35.937 13:43:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:35.937 13:43:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:39:35.937 13:43:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:35.937 13:43:58 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:39:35.937 13:43:58 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:39:35.938 13:43:58 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:39:35.938 13:43:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:39:36.199 13:43:58 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:39:36.199 13:43:58 keyring_file -- keyring/file.sh@1 -- # cleanup 00:39:36.199 13:43:58 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.Qwf9Tc6y9y /tmp/tmp.3OHwYM3qTo 00:39:36.199 13:43:58 keyring_file -- keyring/file.sh@20 -- # killprocess 1288971 00:39:36.199 13:43:58 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1288971 ']' 00:39:36.199 13:43:58 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1288971 00:39:36.199 13:43:58 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:36.199 13:43:58 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:36.199 13:43:58 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1288971 00:39:36.199 13:43:58 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:36.199 13:43:58 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:36.199 13:43:58 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1288971' 00:39:36.199 killing process with pid 1288971 00:39:36.199 13:43:58 keyring_file -- common/autotest_common.sh@973 -- # kill 1288971 00:39:36.199 Received shutdown signal, test time was about 1.000000 seconds 00:39:36.199 00:39:36.199 Latency(us) 00:39:36.199 [2024-12-05T12:43:58.767Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:36.199 [2024-12-05T12:43:58.767Z] =================================================================================================================== 00:39:36.199 [2024-12-05T12:43:58.767Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:39:36.199 13:43:58 keyring_file -- common/autotest_common.sh@978 -- # wait 1288971 00:39:36.644 13:43:58 keyring_file -- keyring/file.sh@21 -- # killprocess 1287151 00:39:36.644 13:43:58 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1287151 ']' 00:39:36.644 13:43:58 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1287151 00:39:36.644 13:43:58 keyring_file -- common/autotest_common.sh@959 -- # uname 00:39:36.644 13:43:58 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:36.644 13:43:58 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1287151 00:39:36.644 13:43:58 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:36.644 13:43:58 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:36.644 13:43:58 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1287151' 00:39:36.644 killing process with pid 1287151 00:39:36.644 13:43:58 keyring_file -- common/autotest_common.sh@973 -- # kill 1287151 00:39:36.644 13:43:58 keyring_file -- common/autotest_common.sh@978 -- # wait 1287151 00:39:36.644 00:39:36.644 real 0m11.816s 00:39:36.644 user 0m28.439s 00:39:36.644 sys 0m2.621s 00:39:36.644 13:43:59 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:36.644 13:43:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:39:36.644 ************************************ 00:39:36.644 END TEST keyring_file 00:39:36.644 ************************************ 00:39:36.644 13:43:59 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:39:36.644 13:43:59 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:36.644 13:43:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:36.644 13:43:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:36.644 13:43:59 
-- common/autotest_common.sh@10 -- # set +x 00:39:36.644 ************************************ 00:39:36.644 START TEST keyring_linux 00:39:36.644 ************************************ 00:39:36.644 13:43:59 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:39:36.644 Joined session keyring: 401652439 00:39:36.921 * Looking for test storage... 00:39:36.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:39:36.921 13:43:59 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:36.921 13:43:59 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:39:36.921 13:43:59 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:36.921 13:43:59 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@345 -- # : 1 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@368 -- # return 0 00:39:36.921 13:43:59 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:36.921 13:43:59 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:36.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.921 --rc genhtml_branch_coverage=1 00:39:36.921 --rc genhtml_function_coverage=1 00:39:36.921 --rc genhtml_legend=1 00:39:36.921 --rc geninfo_all_blocks=1 00:39:36.921 --rc geninfo_unexecuted_blocks=1 00:39:36.921 00:39:36.921 ' 00:39:36.921 13:43:59 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:36.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.921 --rc genhtml_branch_coverage=1 00:39:36.921 --rc genhtml_function_coverage=1 00:39:36.921 --rc genhtml_legend=1 00:39:36.921 --rc geninfo_all_blocks=1 00:39:36.921 --rc geninfo_unexecuted_blocks=1 00:39:36.921 00:39:36.921 ' 00:39:36.921 13:43:59 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:36.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.921 --rc genhtml_branch_coverage=1 00:39:36.921 --rc genhtml_function_coverage=1 00:39:36.921 --rc genhtml_legend=1 00:39:36.921 --rc geninfo_all_blocks=1 00:39:36.921 --rc geninfo_unexecuted_blocks=1 00:39:36.921 00:39:36.921 ' 00:39:36.921 13:43:59 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:36.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:36.921 --rc genhtml_branch_coverage=1 00:39:36.921 --rc genhtml_function_coverage=1 00:39:36.921 --rc genhtml_legend=1 00:39:36.921 --rc geninfo_all_blocks=1 00:39:36.921 --rc geninfo_unexecuted_blocks=1 00:39:36.921 00:39:36.921 ' 00:39:36.921 13:43:59 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:39:36.921 13:43:59 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:36.921 13:43:59 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:36.921 13:43:59 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.921 13:43:59 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.921 13:43:59 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:36.921 13:43:59 keyring_linux -- paths/export.sh@5 -- # export PATH 00:39:36.921 13:43:59 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
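The prep_key traces below show how keyring/common.sh materializes a TLS PSK on disk: mktemp a path, run the hex string through format_interchange_psk (an inline python - heredoc from nvmf/common.sh), and chmod the result 0600 so the permission check that tripped the 0660 case earlier is satisfied. The heredoc computes the NVMe TLS PSK interchange format, NVMeTLSkey-1:<digest>:base64(key bytes + CRC32):, which matches the NVMeTLSkey-1:00: strings handed to keyctl further down. A hedged reconstruction of that one-liner (the heredoc body is our paraphrase of format_key, and the CRC byte order is an assumption; the prefix, key, and digest values come from the trace):

python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"              # the key string is encoded as raw bytes
crc = zlib.crc32(key).to_bytes(4, byteorder="little")   # 4-byte CRC32 appended after the key
print("NVMeTLSkey-1:{:02x}:{}:".format(0, base64.b64encode(key + crc).decode()))
EOF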
00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:39:36.921 13:43:59 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:39:36.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:36.922 13:43:59 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:39:36.922 13:43:59 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:39:36.922 13:43:59 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:39:36.922 13:43:59 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:39:36.922 13:43:59 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:39:36.922 13:43:59 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:39:36.922 13:43:59 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:39:36.922 13:43:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:36.922 13:43:59 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:39:36.922 13:43:59 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:39:36.922 13:43:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:36.922 13:43:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:39:36.922 13:43:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:36.922 13:43:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:39:36.922 13:43:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:39:36.922 /tmp/:spdk-test:key0 00:39:36.922 13:43:59 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:39:36.922 13:43:59 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:39:36.922 13:43:59 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:39:36.922 13:43:59 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:39:36.922 13:43:59 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:39:36.922 13:43:59 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:39:36.922 
13:43:59 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:39:36.922 13:43:59 keyring_linux -- nvmf/common.sh@733 -- # python - 00:39:36.922 13:43:59 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:39:36.922 13:43:59 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:39:36.922 /tmp/:spdk-test:key1 00:39:36.922 13:43:59 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1289430 00:39:36.922 13:43:59 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1289430 00:39:36.922 13:43:59 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:39:36.922 13:43:59 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1289430 ']' 00:39:36.922 13:43:59 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:36.922 13:43:59 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:36.922 13:43:59 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:36.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:36.922 13:43:59 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:36.922 13:43:59 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:37.186 [2024-12-05 13:43:59.534107] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
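prep_key above writes each test key to /tmp/:spdk-test:keyN (mode 0600) in the NVMe/TCP PSK interchange format. The inline python at nvmf/common.sh@733 is not echoed, but the string it produces here (NVMeTLSkey-1:00:...JEiQ:) is consistent with "prefix : hash indicator : base64(PSK bytes followed by their CRC32) :". A sketch under that assumption, reproducing the key0 string:

    key=00112233445566778899aabbccddeeff
    digest=00   # '00' = configured PSK used as-is (no SHA-256/SHA-384 transform)
    # payload = ASCII key bytes + CRC32(key) packed little-endian, then base64
    b64=$(python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); print(base64.b64encode(k+struct.pack("<I",zlib.crc32(k))).decode())' "$key")
    echo "NVMeTLSkey-1:${digest}:${b64}:" > "/tmp/:spdk-test:key0" && chmod 0600 "/tmp/:spdk-test:key0"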
00:39:37.186 [2024-12-05 13:43:59.534187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289430 ] 00:39:37.186 [2024-12-05 13:43:59.616583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.186 [2024-12-05 13:43:59.659308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:37.756 13:44:00 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:37.756 13:44:00 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:37.756 13:44:00 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:39:37.756 13:44:00 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.756 13:44:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:37.756 [2024-12-05 13:44:00.319651] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:38.018 null0 00:39:38.018 [2024-12-05 13:44:00.351695] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:39:38.018 [2024-12-05 13:44:00.352123] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:39:38.018 13:44:00 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:38.018 13:44:00 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:39:38.018 573459985 00:39:38.018 13:44:00 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:39:38.018 835928254 00:39:38.018 13:44:00 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1289752 00:39:38.018 13:44:00 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1289752 /var/tmp/bperf.sock 00:39:38.018 13:44:00 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:39:38.018 13:44:00 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1289752 ']' 00:39:38.018 13:44:00 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:39:38.018 13:44:00 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:38.018 13:44:00 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:39:38.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:39:38.018 13:44:00 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:38.018 13:44:00 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:38.018 [2024-12-05 13:44:00.430917] Starting SPDK v25.01-pre git sha1 0ee529aeb / DPDK 24.03.0 initialization... 
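The numbers printed after each keyctl add above (573459985 and 835928254) are the kernel key serials for the two session-keyring entries; the later get_keysn/check_keys steps resolve the same serials by name. The round trip in isolation, using standard keyutils commands and a placeholder payload:

    sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:<base64-psk>:" @s)   # prints the new serial
    keyctl search @s user :spdk-test:key0   # same serial, resolved by name (what get_keysn does)
    keyctl print "$sn"                      # payload: the interchange-format PSK
    keyctl unlink "$sn"                     # cleanup step, reported as "1 links removed"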
00:39:38.018 [2024-12-05 13:44:00.430967] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289752 ] 00:39:38.018 [2024-12-05 13:44:00.518198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:38.018 [2024-12-05 13:44:00.548137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:38.957 13:44:01 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:38.957 13:44:01 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:39:38.957 13:44:01 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:39:38.957 13:44:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:39:38.957 13:44:01 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:39:38.957 13:44:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:39:39.216 13:44:01 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:39.217 13:44:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:39.217 [2024-12-05 13:44:01.729546] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:39.476 nvme0n1 00:39:39.476 13:44:01 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:39.476 13:44:01 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:39.476 13:44:01 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:39.476 13:44:01 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:39.476 13:44:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:39.476 13:44:01 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:39.476 13:44:01 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:39.476 13:44:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:39.476 13:44:02 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:39.476 13:44:02 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:39.476 13:44:02 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:39.476 13:44:02 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:39.476 13:44:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:39.736 13:44:02 keyring_linux -- keyring/linux.sh@25 -- # sn=573459985 00:39:39.736 13:44:02 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:39.736 13:44:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:39.736 13:44:02 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 573459985 == \5\7\3\4\5\9\9\8\5 ]] 00:39:39.736 13:44:02 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 573459985 00:39:39.736 13:44:02 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:39.736 13:44:02 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:39.736 Running I/O for 1 seconds... 00:39:41.119 16519.00 IOPS, 64.53 MiB/s 00:39:41.119 Latency(us) 00:39:41.119 [2024-12-05T12:44:03.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:41.119 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:41.119 nvme0n1 : 1.01 16517.54 64.52 0.00 0.00 7716.22 6662.83 14308.69 00:39:41.119 [2024-12-05T12:44:03.687Z] =================================================================================================================== 00:39:41.119 [2024-12-05T12:44:03.687Z] Total : 16517.54 64.52 0.00 0.00 7716.22 6662.83 14308.69 00:39:41.119 { 00:39:41.119 "results": [ 00:39:41.119 { 00:39:41.119 "job": "nvme0n1", 00:39:41.119 "core_mask": "0x2", 00:39:41.119 "workload": "randread", 00:39:41.119 "status": "finished", 00:39:41.119 "queue_depth": 128, 00:39:41.119 "io_size": 4096, 00:39:41.119 "runtime": 1.007898, 00:39:41.119 "iops": 16517.544434059797, 00:39:41.119 "mibps": 64.52165794554608, 00:39:41.119 "io_failed": 0, 00:39:41.119 "io_timeout": 0, 00:39:41.119 "avg_latency_us": 7716.220534999199, 00:39:41.119 "min_latency_us": 6662.826666666667, 00:39:41.119 "max_latency_us": 14308.693333333333 00:39:41.119 } 00:39:41.119 ], 00:39:41.119 "core_count": 1 00:39:41.119 } 00:39:41.119 13:44:03 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:41.119 13:44:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:41.119 13:44:03 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:41.119 13:44:03 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:41.119 13:44:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:41.119 13:44:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:41.119 13:44:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:41.119 13:44:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:41.119 13:44:03 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:41.119 13:44:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:41.119 13:44:03 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:41.119 13:44:03 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:41.119 13:44:03 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:39:41.119 13:44:03 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:39:41.119 13:44:03 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:39:41.119 13:44:03 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:41.119 13:44:03 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:39:41.119 13:44:03 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:41.119 13:44:03 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:41.119 13:44:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:41.380 [2024-12-05 13:44:03.821989] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:41.380 [2024-12-05 13:44:03.822650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153d7a0 (107): Transport endpoint is not connected 00:39:41.380 [2024-12-05 13:44:03.823646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x153d7a0 (9): Bad file descriptor 00:39:41.380 [2024-12-05 13:44:03.824648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:41.380 [2024-12-05 13:44:03.824656] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:41.380 [2024-12-05 13:44:03.824662] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:41.380 [2024-12-05 13:44:03.824673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:39:41.380 request: 00:39:41.380 { 00:39:41.380 "name": "nvme0", 00:39:41.380 "trtype": "tcp", 00:39:41.380 "traddr": "127.0.0.1", 00:39:41.380 "adrfam": "ipv4", 00:39:41.380 "trsvcid": "4420", 00:39:41.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:41.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:41.380 "prchk_reftag": false, 00:39:41.380 "prchk_guard": false, 00:39:41.380 "hdgst": false, 00:39:41.380 "ddgst": false, 00:39:41.380 "psk": ":spdk-test:key1", 00:39:41.380 "allow_unrecognized_csi": false, 00:39:41.380 "method": "bdev_nvme_attach_controller", 00:39:41.380 "req_id": 1 00:39:41.380 } 00:39:41.380 Got JSON-RPC error response 00:39:41.381 response: 00:39:41.381 { 00:39:41.381 "code": -5, 00:39:41.381 "message": "Input/output error" 00:39:41.381 } 00:39:41.381 13:44:03 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:39:41.381 13:44:03 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:41.381 13:44:03 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:41.381 13:44:03 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:41.381 13:44:03 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:41.381 13:44:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:41.381 13:44:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:41.381 13:44:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:41.381 13:44:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:41.381 13:44:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:41.381 13:44:03 keyring_linux -- keyring/linux.sh@33 -- # sn=573459985 00:39:41.381 13:44:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 573459985 00:39:41.381 1 links removed 00:39:41.381 13:44:03 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:41.381 13:44:03 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:41.381 13:44:03 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:41.381 13:44:03 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:41.381 13:44:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:41.381 13:44:03 keyring_linux -- keyring/linux.sh@33 -- # sn=835928254 00:39:41.381 13:44:03 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 835928254 00:39:41.381 1 links removed 00:39:41.381 13:44:03 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1289752 00:39:41.381 13:44:03 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1289752 ']' 00:39:41.381 13:44:03 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1289752 00:39:41.381 13:44:03 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:41.381 13:44:03 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:41.381 13:44:03 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1289752 00:39:41.381 13:44:03 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:41.381 13:44:03 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:41.381 13:44:03 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1289752' 00:39:41.381 killing process with pid 1289752 00:39:41.381 13:44:03 keyring_linux -- common/autotest_common.sh@973 -- # kill 1289752 00:39:41.381 Received shutdown signal, test time was about 1.000000 seconds 00:39:41.381 00:39:41.381 
Latency(us) 00:39:41.381 [2024-12-05T12:44:03.949Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:41.381 [2024-12-05T12:44:03.949Z] =================================================================================================================== 00:39:41.381 [2024-12-05T12:44:03.949Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:41.381 13:44:03 keyring_linux -- common/autotest_common.sh@978 -- # wait 1289752 00:39:41.641 13:44:04 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1289430 00:39:41.641 13:44:04 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1289430 ']' 00:39:41.641 13:44:04 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1289430 00:39:41.641 13:44:04 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:39:41.641 13:44:04 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:41.641 13:44:04 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1289430 00:39:41.641 13:44:04 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:41.641 13:44:04 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:41.641 13:44:04 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1289430' 00:39:41.641 killing process with pid 1289430 00:39:41.641 13:44:04 keyring_linux -- common/autotest_common.sh@973 -- # kill 1289430 00:39:41.641 13:44:04 keyring_linux -- common/autotest_common.sh@978 -- # wait 1289430 00:39:41.903 00:39:41.903 real 0m5.173s 00:39:41.903 user 0m9.491s 00:39:41.903 sys 0m1.423s 00:39:41.903 13:44:04 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:41.903 13:44:04 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:41.903 ************************************ 00:39:41.903 END TEST keyring_linux 00:39:41.903 ************************************ 00:39:41.903 13:44:04 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:39:41.903 13:44:04 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:39:41.903 13:44:04 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:39:41.903 13:44:04 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:39:41.903 13:44:04 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:39:41.903 13:44:04 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:39:41.903 13:44:04 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:39:41.903 13:44:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:39:41.903 13:44:04 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:39:41.903 13:44:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:39:41.903 13:44:04 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:39:41.903 13:44:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:39:41.903 13:44:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:39:41.903 13:44:04 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:39:41.903 13:44:04 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:39:41.903 13:44:04 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:39:41.903 13:44:04 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:39:41.903 13:44:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:41.903 13:44:04 -- common/autotest_common.sh@10 -- # set +x 00:39:41.903 13:44:04 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:39:41.903 13:44:04 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:39:41.903 13:44:04 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:39:41.903 13:44:04 -- common/autotest_common.sh@10 -- # set +x 00:39:50.075 INFO: APP EXITING 
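The failed attach above is the intended negative path: once nvme0 is detached and check_keys 0 confirms the bperf keyring is empty, re-attaching with --psk :spdk-test:key1 against the listener set up for key0 has to fail, and the NOT wrapper from common/autotest_common.sh turns that JSON-RPC Input/output error (code -5) into a passing result. A simplified analogue, assuming rpc.py is on PATH; the real wrapper is more careful and also distinguishes signal exits, as the "es > 128" check in the trace shows:

    NOT() { if "$@"; then return 1; else return 0; fi; }   # pass only when the wrapped command fails
    NOT rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1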
00:39:50.075 INFO: killing all VMs 00:39:50.075 INFO: killing vhost app 00:39:50.075 INFO: EXIT DONE 00:39:53.375 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:39:53.375 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:39:53.375 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:39:53.375 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:39:53.375 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:39:53.375 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:39:53.375 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:39:53.375 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:39:53.375 0000:65:00.0 (144d a80a): Already using the nvme driver 00:39:53.375 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:39:53.635 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:39:53.635 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:39:53.635 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:39:53.635 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:39:53.635 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:39:53.635 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:39:53.635 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:39:57.850 Cleaning 00:39:57.850 Removing: /var/run/dpdk/spdk0/config 00:39:57.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:57.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:57.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:57.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:57.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:57.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:57.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:57.850 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:57.850 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:57.850 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:57.850 Removing: /var/run/dpdk/spdk1/config 00:39:57.850 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:57.850 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:57.850 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:57.850 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:57.850 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:57.850 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:57.850 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:57.850 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:57.850 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:57.850 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:57.850 Removing: /var/run/dpdk/spdk2/config 00:39:57.850 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:57.850 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:57.850 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:57.850 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:57.850 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:57.850 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:57.850 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:57.850 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:57.850 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:57.850 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:57.850 Removing: /var/run/dpdk/spdk3/config 00:39:57.850 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:57.850 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:57.851 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:57.851 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:57.851 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:57.851 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:57.851 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:57.851 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:57.851 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:57.851 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:57.851 Removing: /var/run/dpdk/spdk4/config 00:39:57.851 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:57.851 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:57.851 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:57.851 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:57.851 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:57.851 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:57.851 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:57.851 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:57.851 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:57.851 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:57.851 Removing: /dev/shm/bdev_svc_trace.1 00:39:57.851 Removing: /dev/shm/nvmf_trace.0 00:39:57.851 Removing: /dev/shm/spdk_tgt_trace.pid673645 00:39:57.851 Removing: /var/run/dpdk/spdk0 00:39:57.851 Removing: /var/run/dpdk/spdk1 00:39:57.851 Removing: /var/run/dpdk/spdk2 00:39:57.851 Removing: /var/run/dpdk/spdk3 00:39:57.851 Removing: /var/run/dpdk/spdk4 00:39:57.851 Removing: /var/run/dpdk/spdk_pid1000036 00:39:57.851 Removing: /var/run/dpdk/spdk_pid1001592 00:39:57.851 Removing: /var/run/dpdk/spdk_pid1003311 00:39:57.851 Removing: /var/run/dpdk/spdk_pid1005055 00:39:57.851 Removing: /var/run/dpdk/spdk_pid1011219 00:39:57.851 Removing: /var/run/dpdk/spdk_pid1017036 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1022417 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1032658 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1032774 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1038287 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1038625 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1038951 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1039300 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1039327 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1045494 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1046745 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1052620 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1055961 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1063013 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1070040 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1080547 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1089992 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1089994 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1115583 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1116383 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1117204 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1117955 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1119017 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1119709 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1120385 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1121092 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1126798 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1127136 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1134854 00:39:58.111 Removing: 
/var/run/dpdk/spdk_pid1135229 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1142045 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1147750 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1160215 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1160936 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1166451 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1166824 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1172509 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1179783 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1182678 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1196042 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1208440 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1210418 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1211474 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1232482 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1237552 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1240798 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1248547 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1248566 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1255258 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1257743 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1260300 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1261867 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1264217 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1265641 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1276541 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1277030 00:39:58.111 Removing: /var/run/dpdk/spdk_pid1277615 00:39:58.374 Removing: /var/run/dpdk/spdk_pid1280693 00:39:58.374 Removing: /var/run/dpdk/spdk_pid1281358 00:39:58.374 Removing: /var/run/dpdk/spdk_pid1282026 00:39:58.374 Removing: /var/run/dpdk/spdk_pid1287151 00:39:58.374 Removing: /var/run/dpdk/spdk_pid1287178 00:39:58.374 Removing: /var/run/dpdk/spdk_pid1288971 00:39:58.374 Removing: /var/run/dpdk/spdk_pid1289430 00:39:58.374 Removing: /var/run/dpdk/spdk_pid1289752 00:39:58.374 Removing: /var/run/dpdk/spdk_pid671943 00:39:58.374 Removing: /var/run/dpdk/spdk_pid673645 00:39:58.374 Removing: /var/run/dpdk/spdk_pid674283 00:39:58.374 Removing: /var/run/dpdk/spdk_pid675358 00:39:58.374 Removing: /var/run/dpdk/spdk_pid675662 00:39:58.374 Removing: /var/run/dpdk/spdk_pid676788 00:39:58.374 Removing: /var/run/dpdk/spdk_pid677059 00:39:58.374 Removing: /var/run/dpdk/spdk_pid677448 00:39:58.374 Removing: /var/run/dpdk/spdk_pid678500 00:39:58.374 Removing: /var/run/dpdk/spdk_pid679212 00:39:58.374 Removing: /var/run/dpdk/spdk_pid679622 00:39:58.374 Removing: /var/run/dpdk/spdk_pid680025 00:39:58.374 Removing: /var/run/dpdk/spdk_pid680433 00:39:58.374 Removing: /var/run/dpdk/spdk_pid680837 00:39:58.374 Removing: /var/run/dpdk/spdk_pid681323 00:39:58.374 Removing: /var/run/dpdk/spdk_pid681669 00:39:58.374 Removing: /var/run/dpdk/spdk_pid682068 00:39:58.374 Removing: /var/run/dpdk/spdk_pid683113 00:39:58.374 Removing: /var/run/dpdk/spdk_pid686384 00:39:58.374 Removing: /var/run/dpdk/spdk_pid686754 00:39:58.374 Removing: /var/run/dpdk/spdk_pid687125 00:39:58.374 Removing: /var/run/dpdk/spdk_pid687376 00:39:58.374 Removing: /var/run/dpdk/spdk_pid687831 00:39:58.374 Removing: /var/run/dpdk/spdk_pid687838 00:39:58.374 Removing: /var/run/dpdk/spdk_pid688214 00:39:58.374 Removing: /var/run/dpdk/spdk_pid688545 00:39:58.374 Removing: /var/run/dpdk/spdk_pid688903 00:39:58.374 Removing: /var/run/dpdk/spdk_pid688924 00:39:58.374 Removing: /var/run/dpdk/spdk_pid689284 00:39:58.374 Removing: /var/run/dpdk/spdk_pid689317 00:39:58.374 Removing: /var/run/dpdk/spdk_pid689979 00:39:58.374 Removing: /var/run/dpdk/spdk_pid690120 
00:39:58.374 Removing: /var/run/dpdk/spdk_pid690503 00:39:58.374 Removing: /var/run/dpdk/spdk_pid695706 00:39:58.374 Removing: /var/run/dpdk/spdk_pid701736 00:39:58.374 Removing: /var/run/dpdk/spdk_pid714245 00:39:58.374 Removing: /var/run/dpdk/spdk_pid715035 00:39:58.374 Removing: /var/run/dpdk/spdk_pid720802 00:39:58.374 Removing: /var/run/dpdk/spdk_pid721297 00:39:58.374 Removing: /var/run/dpdk/spdk_pid727013 00:39:58.374 Removing: /var/run/dpdk/spdk_pid734998 00:39:58.374 Removing: /var/run/dpdk/spdk_pid738264 00:39:58.374 Removing: /var/run/dpdk/spdk_pid751905 00:39:58.374 Removing: /var/run/dpdk/spdk_pid763999 00:39:58.374 Removing: /var/run/dpdk/spdk_pid766076 00:39:58.374 Removing: /var/run/dpdk/spdk_pid767246 00:39:58.374 Removing: /var/run/dpdk/spdk_pid790076 00:39:58.374 Removing: /var/run/dpdk/spdk_pid795860 00:39:58.374 Removing: /var/run/dpdk/spdk_pid856692 00:39:58.374 Removing: /var/run/dpdk/spdk_pid863701 00:39:58.635 Removing: /var/run/dpdk/spdk_pid871238 00:39:58.635 Removing: /var/run/dpdk/spdk_pid879818 00:39:58.635 Removing: /var/run/dpdk/spdk_pid879820 00:39:58.635 Removing: /var/run/dpdk/spdk_pid880831 00:39:58.635 Removing: /var/run/dpdk/spdk_pid881836 00:39:58.635 Removing: /var/run/dpdk/spdk_pid882842 00:39:58.635 Removing: /var/run/dpdk/spdk_pid883517 00:39:58.635 Removing: /var/run/dpdk/spdk_pid883523 00:39:58.635 Removing: /var/run/dpdk/spdk_pid883849 00:39:58.635 Removing: /var/run/dpdk/spdk_pid883975 00:39:58.635 Removing: /var/run/dpdk/spdk_pid884112 00:39:58.635 Removing: /var/run/dpdk/spdk_pid885157 00:39:58.635 Removing: /var/run/dpdk/spdk_pid886176 00:39:58.635 Removing: /var/run/dpdk/spdk_pid887204 00:39:58.635 Removing: /var/run/dpdk/spdk_pid887874 00:39:58.635 Removing: /var/run/dpdk/spdk_pid887876 00:39:58.635 Removing: /var/run/dpdk/spdk_pid888218 00:39:58.635 Removing: /var/run/dpdk/spdk_pid889430 00:39:58.635 Removing: /var/run/dpdk/spdk_pid890745 00:39:58.635 Removing: /var/run/dpdk/spdk_pid901974 00:39:58.635 Removing: /var/run/dpdk/spdk_pid938697 00:39:58.635 Removing: /var/run/dpdk/spdk_pid944567 00:39:58.635 Removing: /var/run/dpdk/spdk_pid946543 00:39:58.635 Removing: /var/run/dpdk/spdk_pid948668 00:39:58.635 Removing: /var/run/dpdk/spdk_pid948894 00:39:58.635 Removing: /var/run/dpdk/spdk_pid948916 00:39:58.635 Removing: /var/run/dpdk/spdk_pid949239 00:39:58.635 Removing: /var/run/dpdk/spdk_pid949761 00:39:58.635 Removing: /var/run/dpdk/spdk_pid951978 00:39:58.635 Removing: /var/run/dpdk/spdk_pid953071 00:39:58.635 Removing: /var/run/dpdk/spdk_pid953601 00:39:58.635 Removing: /var/run/dpdk/spdk_pid956158 00:39:58.635 Removing: /var/run/dpdk/spdk_pid956857 00:39:58.635 Removing: /var/run/dpdk/spdk_pid957574 00:39:58.635 Removing: /var/run/dpdk/spdk_pid963301 00:39:58.635 Removing: /var/run/dpdk/spdk_pid970354 00:39:58.635 Removing: /var/run/dpdk/spdk_pid970355 00:39:58.635 Removing: /var/run/dpdk/spdk_pid970356 00:39:58.635 Removing: /var/run/dpdk/spdk_pid975529 00:39:58.635 Removing: /var/run/dpdk/spdk_pid987039 00:39:58.635 Removing: /var/run/dpdk/spdk_pid992393 00:39:58.635 Clean 00:39:58.896 13:44:21 -- common/autotest_common.sh@1453 -- # return 0 00:39:58.896 13:44:21 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:39:58.896 13:44:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:58.896 13:44:21 -- common/autotest_common.sh@10 -- # set +x 00:39:58.896 13:44:21 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:39:58.896 13:44:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:58.896 13:44:21 -- 
common/autotest_common.sh@10 -- # set +x 00:39:58.896 13:44:21 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:39:58.896 13:44:21 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:39:58.896 13:44:21 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:39:58.896 13:44:21 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:39:58.896 13:44:21 -- spdk/autotest.sh@398 -- # hostname 00:39:58.896 13:44:21 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:39:59.157 geninfo: WARNING: invalid characters removed from testname! 00:40:25.740 13:44:46 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:27.125 13:44:49 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:29.668 13:44:51 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:31.054 13:44:53 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:32.967 13:44:55 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:34.347 13:44:56 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:40:36.258 13:44:58 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:40:36.258 13:44:58 -- spdk/autorun.sh@1 -- $ timing_finish 00:40:36.258 13:44:58 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:40:36.258 13:44:58 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:40:36.258 13:44:58 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:40:36.258 13:44:58 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:40:36.258 + [[ -n 585978 ]] 00:40:36.258 + sudo kill 585978 00:40:36.268 [Pipeline] } 00:40:36.284 [Pipeline] // stage 00:40:36.290 [Pipeline] } 00:40:36.306 [Pipeline] // timeout 00:40:36.312 [Pipeline] } 00:40:36.327 [Pipeline] // catchError 00:40:36.333 [Pipeline] } 00:40:36.349 [Pipeline] // wrap 00:40:36.356 [Pipeline] } 00:40:36.371 [Pipeline] // catchError 00:40:36.382 [Pipeline] stage 00:40:36.385 [Pipeline] { (Epilogue) 00:40:36.399 [Pipeline] catchError 00:40:36.401 [Pipeline] { 00:40:36.415 [Pipeline] echo 00:40:36.417 Cleanup processes 00:40:36.424 [Pipeline] sh 00:40:36.712 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:36.712 1303115 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:36.728 [Pipeline] sh 00:40:37.017 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:40:37.017 ++ grep -v 'sudo pgrep' 00:40:37.017 ++ awk '{print $1}' 00:40:37.017 + sudo kill -9 00:40:37.017 + true 00:40:37.030 [Pipeline] sh 00:40:37.318 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:40:49.585 [Pipeline] sh 00:40:49.870 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:40:49.870 Artifacts sizes are good 00:40:49.887 [Pipeline] archiveArtifacts 00:40:49.895 Archiving artifacts 00:40:50.057 [Pipeline] sh 00:40:50.416 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:40:50.431 [Pipeline] cleanWs 00:40:50.441 [WS-CLEANUP] Deleting project workspace... 00:40:50.441 [WS-CLEANUP] Deferred wipeout is used... 00:40:50.448 [WS-CLEANUP] done 00:40:50.450 [Pipeline] } 00:40:50.469 [Pipeline] // catchError 00:40:50.481 [Pipeline] sh 00:40:50.773 + logger -p user.info -t JENKINS-CI 00:40:50.784 [Pipeline] } 00:40:50.796 [Pipeline] // stage 00:40:50.800 [Pipeline] } 00:40:50.812 [Pipeline] // node 00:40:50.816 [Pipeline] End of Pipeline 00:40:50.844 Finished: SUCCESS